Poupou's Corner of the Web

Looking for perfect security? Try a wireless brick.
Otherwise you may find some imperfect stuff here...


More weekend hacking

So I started adding a Gtk# GUI on top of my previous Cecil / Dot graph hacks (see here and here). Nothing extraordinary about it; in fact, nothing interesting enough to show yet ;-)

Actually most of my weekend hacking wasn't GUI related at all. Most of it has gone into refactoring the existing source code to reduce duplication (as they all evolved from the same code base, the new permview.exe tool) and to move the different tools into plugins.

Plugins are often overrated, and overused, but in this case I know some of them require quite a lot of memory (e.g. finding all public callers) and many planned (i.e. in a dark corner of my mind) plugins will also have high memory requirements. How much? Well, probably more than I have right now, so there's no point keeping all the plugins loaded at the same time unless necessary (or unless working on a limited number of assemblies).

To make this easier the tool operates on working sets, which are simply a list of assemblies and a list of plugins (with their options) to load. So it's possible to work with a few assemblies (e.g. mscorlib.dll) in more detail, or to work with all of them (e.g. every dll under /mcs/class/lib/default/) with fewer plugins.

I also wrote some low-level classes to help generate dot files and couldn't resist trying them out on a new kind of graph (more fun than refactoring the existing ones :-). This graph shows the dependencies between assemblies, in this case the Mono.Security.dll dependencies.

The graph shows all assemblies referenced by Mono.Security.dll either directly (like mscorlib.dll and System.dll) or indirectly (System.Xml.dll being loaded by System.dll). It also shows the references between the referenced assemblies, so we can clearly see the cyclic dependency that exists between the System.dll and System.Xml.dll assemblies. Assemblies loaded via reflection (e.g. via CryptoConfig) are missing as they can't (in most cases) be known until runtime.
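The edge-emitting part of such a graph is simple enough to sketch. Here is a minimal, hypothetical version that turns a map of assembly references into a Dot digraph; the reference lists are hard-coded for illustration, while the real tool reads them from assembly metadata (e.g. with Cecil):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

class DepGraph {
	// Turn a map of assembly -> direct references into a Dot digraph.
	// Hypothetical sketch: the real tool walks assembly metadata instead
	// of using a hard-coded dictionary.
	public static string ToDot (IDictionary<string, string[]> refs)
	{
		StringBuilder sb = new StringBuilder ("digraph deps {\n");
		foreach (KeyValuePair<string, string[]> kv in refs)
			foreach (string r in kv.Value)
				sb.AppendFormat ("\t\"{0}\" -> \"{1}\";\n", kv.Key, r);
		sb.Append ("}\n");
		return sb.ToString ();
	}

	static void Main ()
	{
		// Illustrative data only - mirrors the cyclic System/System.Xml case.
		IDictionary<string, string[]> refs = new Dictionary<string, string[]> ();
		refs ["Mono.Security"] = new string[] { "mscorlib", "System" };
		refs ["System"] = new string[] { "mscorlib", "System.Xml" };
		refs ["System.Xml"] = new string[] { "mscorlib", "System" };
		Console.Write (ToDot (refs));
	}
}
```

Feeding the output to something like dot -Tpng deps.dot -o deps.png renders the picture.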

From a security point of view the graph adds two important pieces of information.

  • First, the type of caller that can call into the assembly. By default strongnamed assemblies can only be used by fully-trusted code. The CLR enforces this by adding an invisible linkdemand to all publicly accessible methods/classes. Adding an [AllowPartiallyTrustedCallers] attribute at the assembly level makes the assembly available to partially trusted callers (e.g. code coming from the internet). This can be dangerous if the assembly API isn't designed for such use - which is why most assemblies do not support partial trust.
  • Second, assemblies compiled with unsafe code, i.e. where the compiler inserted an [UnverifiableCode] attribute in the unsafe module, are shown in red. The CLR cannot ensure the code in those assemblies isn't messing up your system, so it's nice to know which ones they are.

And, of course, being both red (unsafe) and accessible from partially trusted callers is far riskier than either one alone.
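Both markers are ordinary custom attributes, so they are easy to detect. A rough sketch using plain reflection on an already-loaded assembly (Cecil can do the same check without loading the assembly); the sample just probes its own assembly, which was compiled without unsafe code and without APTCA:

```csharp
using System;
using System.Reflection;
using System.Security;

class AttrCheck {
	// True if the assembly opted into partially trusted callers (APTCA).
	public static bool IsAptca (Assembly a)
	{
		return a.GetCustomAttributes (typeof (AllowPartiallyTrustedCallersAttribute), false).Length > 0;
	}

	// True if any module was compiled with unsafe (unverifiable) code.
	public static bool IsUnverifiable (Assembly a)
	{
		foreach (Module m in a.GetModules ())
			if (m.GetCustomAttributes (typeof (UnverifiableCodeAttribute), false).Length > 0)
				return true;
		return false;
	}

	static void Main ()
	{
		Assembly self = typeof (AttrCheck).Assembly;
		Console.WriteLine ("APTCA: {0}, unsafe: {1}", IsAptca (self), IsUnverifiable (self));
	}
}
```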

Like the previous graphs, assembly dependency graphs can get very ugly. In fact the worst cases I've seen are data providers. They have a lot of dependencies themselves (and some even have unmanaged dependencies not shown in the graphs) and they often link to System.Windows.Forms.dll to include design-time support (which adds a lot more dependencies).

4/25/2005 18:35:58 | Comments | Permalink

Mono Security Manager Part V - InheritanceDemand

Another new feature of Mono 1.1.5 was the support of InheritanceDemand. Inheritance demands are very similar to linkdemands but easier to understand (mostly because they have fewer special cases to consider).

Just like linkdemands, inheritance demands do not happen at runtime. Instead they occur at load time, i.e. in the processing that follows the loading of an assembly by the CLR. This means that, just like linkdemands, imperative inheritance demands do not exist. Inheritance demands are evaluated in two different places at load time, depending on where the security attributes were applied [1].

Class and Interface

Inheritance demands are used to limit extensibility, via inheritance (hence the name), of classes. A class that wants to inherit from another class must pass the security checks defined by the base class. The same checks can also be required before a class can implement an interface.

This is much finer grained than the sealed modifier (which blocks inheritance by all code). For example the abstract class System.IO.FileSystemInfo in assembly mscorlib.dll can only be inherited by code coming from a fully trusted assembly (i.e. FullTrust).

[FileIOPermission (SecurityAction.InheritanceDemand, Unrestricted = true)]
public abstract class FileSystemInfo : MarshalByRefObject, ISerializable {

Method (and the likes)

Something similar also occurs for methods, properties, events... An inheritance demand on a method controls whether that method can be overridden, i.e. whether a derived class may override the virtual/abstract method defined in the base class. Methods defined in an interface can also be protected.
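As a sketch (the class and method names are made up for illustration), a method-level inheritance demand looks just like the class-level one; here only code from a fully trusted assembly would be allowed to override Write:

```csharp
using System.Security.Permissions;

public class AuditSink {
	// Hypothetical example: overriding this virtual method requires
	// the derived class to come from a fully trusted assembly.
	[PermissionSet (SecurityAction.InheritanceDemand, Name = "FullTrust")]
	public virtual void Write (string entry)
	{
		// default (safe) implementation
	}
}
```

Note that declarative security attributes like this one end up in the DeclSecurity metadata table, not in the regular custom attribute blobs.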

Like linkdemands, there are many fun things to do with inheritance demands, e.g. you could allow a class to inherit yours only from noon to midnight on weekdays. But, honestly, inheritance demands are almost always used with code identity permissions (e.g. strongnames, publisher, hash, zone, url...). Which also means that their use will be a little more limited with the CLR 2.0 (more on that another time).

Finally it's kind of hard to catch a SecurityException thrown by an inheritance demand, as it is thrown for all classes/methods at load time. So you must catch it when loading the assembly - which most people prefer not to do manually. This also makes inheritance demands harder to test using NUnit :-(

[1] Actually you can apply inheritance demands (and other SecurityActions) almost anywhere. This is because the restrictions for applying security attributes are the same as for normal attributes - i.e. they are controlled by the Attribute class and not by the SecurityAction. However applying security attributes doesn't mean the CLR will evaluate them. This would make an interesting rule to implement (using Cecil of course ;-) in an FxCop-style tool.

4/21/2005 19:30:48 | Comments | Permalink

More Cecil/Dot graphs

I did more Cecil/Dot hacking this weekend. The previous graphs have given me good ideas of how some user code could exploit some critical methods. They even show some of the security checks - the declarative security attributes on the methods (evaluated before entering the method). But this picture is missing a lot of details, like any security checks done inside the methods (i.e. the imperative security checks).

A good example is when access to a file or an environment variable is required. The name of the resource isn't known until runtime (at least from the framework's point of view) so declarative security cannot be used. This graphic shows the calls made to Environment.GetEnvironmentVariable(String) from inside mscorlib.dll. We see that, by definition (metadata), anyone can call Environment.GetEnvironmentVariable(String). We also know, or at least expect, that access to environment variables is protected by CAS. But we can't see it unless we look at the IL code.

Note #1: The graphic shows public types/methods in blue. Bold is used on static methods.

Note #2: Actually the previous graphic also shows another problem. The static constructor (.cctor) of the System.Runtime.Serialization.Formatters.Binary.BinaryCommon type asks for an environment variable. The problem is that we can't predict the stack when a static constructor is called - so we can't know if any CAS permission will succeed or fail. In this case this isn't a big problem as the type (and the .cctor) aren't public and the environment variable name cannot be influenced by user code. We just have to change the existing call to a security-less version of Environment.GetEnvironmentVariable(String).

Ok, back to the main issue... we expect to see a CAS permission demand in Environment.GetEnvironmentVariable(String). We could look at the IL itself, which would be easy in very small methods, like this one, but can easily get confusing for large methods (e.g. do all the code paths go through the security check?).

System.String System.Environment::GetEnvironmentVariable(System.String)
{
	// code size : 29
	.maxstack 8
	.locals ()
	IL_0000: call System.Boolean System.Security.SecurityManager::get_SecurityEnabled()
	IL_0005: brfalse IL_0016
	IL_000A: ldc.i4.1
	IL_000B: ldarg.0
	IL_000C: newobj System.Void System.Security.Permissions.EnvironmentPermission::.ctor(System.Security.Permissions.EnvironmentPermissionAccess,System.String)
	IL_0011: callvirt System.Void System.Security.CodeAccessPermission::Demand()
	IL_0016: ldarg.0
	IL_0017: call System.String System.Environment::internalGetEnvironmentVariable(System.String)
	IL_001C: ret
}

Note #3: Extracting the IL code with Cecil is very simple. See Jean-Baptiste Evain's sample code.

So this time I wanted to add graphs of the IL code, i.e. generate dot files from the previous IL. This is very similar to what other people have been doing. However my version has some security enhancements (well that's the whole point of it ;-). For example I mark some calls to the security runtime in red and display internal calls with "double lines". It's simple but effective as it makes it easy to see if (and where) some code can bypass a security check inside a method.

Now looking at the same IL code as a colored graph makes it perfectly clear. If the security manager is enabled (first red box) then an EnvironmentPermission instance is created with the variable name and a Demand (second red box) is made prior to returning the value.
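Translated back into C#, the method body reads roughly like this (reconstructed from the IL above; internalGetEnvironmentVariable is the icall, so this fragment is for illustration only and won't compile outside corlib):

```csharp
public static string GetEnvironmentVariable (string variable)
{
	// imperative check: the variable name is only known at runtime,
	// so a declarative EnvironmentPermission attribute couldn't be used
	if (SecurityManager.SecurityEnabled)
		new EnvironmentPermission (EnvironmentPermissionAccess.Read, variable).Demand ();
	return internalGetEnvironmentVariable (variable);
}
```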

My only problem is that dot-ing IL can generate very big bitmaps (this one being small). They compress well on disk but can require a lot of RAM to display. I'll need to look at dot's options to see if I can squeeze them a little without losing readability.

I still have many ideas to visualize code using Cecil and Dot but I think it's about time I put some Gtk# GUI on top of this...

4/18/2005 11:09:30 | Comments | Permalink

Mono Security Manager Part IV - LinkDemand

It took me some time to blog about them but a lot of new CAS features have been added in Mono 1.1.5. You can keep an eye on the CAS status page for more frequent updates on any new feature.

One of the new features is support for LinkDemand. LinkDemands are very similar to demands (which are sometimes called "full demands" when compared to linkdemands). The main difference is that linkdemands are evaluated at JIT time (instead of at run time). If you google about them you'll see that many articles and books stop their description right there, while some give more details (e.g. "it's less secure") without explaining why.

First, methods are JITed only once (well that's true for the current versions of Mono, but even with dynamic re-compilation a method won't be recompiled at every execution). Evaluating the stack at that single point would be pointless. E.g.

public void DoSomethingSafe ()
{
	// secure everything
	CallSomethingCritical ();
	// clean up
}

public void DoSomethingEvil ()
{
	// prepare evil conquests
	CallSomethingCritical ();
	// send bugs and spam to everyone
}

JITing CallSomethingCritical when called from DoSomethingSafe would result in a different decision (stack-wise) than JITing it from DoSomethingEvil. So instead of a stack walk (or the often mis-described "one frame stack walk") we reverse the problem like this:

When JITing DoSomethingSafe we ask ourselves if we have the permission to link to CallSomethingCritical. The same thing happens when we JIT DoSomethingEvil (hopefully with a different decision). Now the name LinkDemand makes a little more sense :-).
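Declaratively, the check on CallSomethingCritical would look like any other security attribute. A hypothetical sketch (FullTrust being the same named permission set the CLR uses for its own implicit linkdemands):

```csharp
using System.Security.Permissions;

public class Critical {
	// Evaluated once, when each caller is JITed, and only against
	// that immediate caller - not the whole stack.
	[PermissionSet (SecurityAction.LinkDemand, Name = "FullTrust")]
	public void CallSomethingCritical ()
	{
		// sensitive work
	}
}
```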

The problem with reversing is that we lose the context of the call (actually there isn't much context, security-wise, at JIT time anyway). This is why this is (truly) less secure than a full demand. But linkdemands make sense when you use them properly (and not as a cheap version of full demands). Want a trick? When in doubt use full demands ;-).

Another difference is that linkdemands can only be made declaratively by the programmer (or in a few special cases by the CLR itself). No big loss, since an imperative link demand wouldn't make any sense at JIT time.

Special cases

There are a few special cases with linkdemands that require more detail:

  • Internal calls are protected by the CLR with something similar to linkdemands. I say similar because it doesn't quite follow the same rules (you probably guessed that a stack walk isn't needed/done for icalls). Actually the 1.x and 2.0 CLR seem to use different rules about them too. Let's just assume that internal calls aren't meant to be called outside a few trusted assemblies (e.g. by any application code) and you'll be safe in the future.

  • Some assemblies aren't designed to be executed under partial trust. In fact all strongnamed assemblies are considered, by default, unsafe for partial trust. The CLR enforces this by adding a LinkDemand for FullTrust on every publicly accessible (public and protected) method of every publicly available class in an assembly - unless the assembly is marked as safe by including the AllowPartiallyTrustedCallers attribute.

  • Reflection is cool but messes up the caller logic used by linkdemands. This is because the caller is always some code inside corlib (which is fully trusted). This would result in every linkdemand being granted when using reflection (it's not as bad as it sounds, as reflection should only be granted to trusted code IMHO). Anyway the fix to this problem is to turn the linkdemand into a full demand. However this isn't a perfect fix, as the demand has far more chances to fail this way (when evaluating all the other frames in the stack).

There is another special case related to unmanaged code that I'll keep for a further entry...

4/15/2005 15:42:02 | Comments | Permalink

The views expressed on this website/weblog are mine alone and do not necessarily reflect the views of my employer.