Weblog
News flash: Cheap certificates are cheap!
How does this affect Mono?
Framework
Mono, by default, does not trust any CA root - this is not the job of a framework (it provides the plumbing, not the water).
There are several reasons for that (see the FAQ) but, honestly, very few people need to trust all 139 (as of today) CA roots that mozroots could install. In fact the probability of a false certificate being issued grows with the number of CAs, and the more of them you trust, the more likely you are to be affected.
Applications
For applications the main challenge is that they cannot (or at least should not) totally depend on the state of the user/machine certificate store(s).
Why? It could be empty (e.g. Mono's default), it could contain a lot of junk (e.g. old self-signed or test certificates) or it could include every CA known to this (and likely other) world(s).
OTOH (and unlike frameworks) applications generally know what they want/need in order to execute properly. E.g. if you're using SSL to check for new mail on GMail then you can (and should) easily add a few checks and refuse certificates that are not related to the job.
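For instance, a client that only ever talks to one server could pin the expected certificate subject. This is only a sketch of the idea: the host name, subject string and class/method names below are placeholders of mine, not code from Mono - adapt them to whatever your server actually presents.

```csharp
using System;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

static class PinnedValidation {

	// does the certificate subject name the host we expect ?
	// (split out so the logic can be tested without a real certificate)
	public static bool SubjectMatchesHost (string subject, string host)
	{
		return subject.Contains ("CN=" + host);
	}

	// to be used as the RemoteCertificateValidationCallback of an SslStream
	// connecting to (the hypothetical) pop.example.com - and nothing else
	public static bool Validate (object sender, X509Certificate certificate,
		X509Chain chain, SslPolicyErrors sslPolicyErrors)
	{
		if (sslPolicyErrors != SslPolicyErrors.None)
			return false;
		return SubjectMatchesHost (certificate.Subject, "pop.example.com");
	}
}
```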
Now since I just told you not to (totally) depend on the certificate store(s) you might feel you have something to do wrt the above issue. Have a look at this wiki page for some approaches. But if you're already dealing with (and checking for) specific hosts then you're likely ok - unless your host(s) match one of the fraudulent certificates.
If your code is more general (e.g. it can connect to anything) then you can resort to the browser solution: a blacklist. Here's some sample code that will refuse the known-to-be fraudulent certificates.
// to be used as the RemoteCertificateValidationCallback of an SslStream
public static bool ValidateServerCertificate (object sender,
	X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
	if (sslPolicyErrors != SslPolicyErrors.None)
		return false;

	// blacklist of known-to-be fraudulent certificates
	switch (certificate.GetSerialNumberString ()) {
	case "009239D5348F40D1695A745470E1F23F43":	// addons.mozilla.org
	case "00D8F35F4EB7872B2DAB0692E315382FB0":	// Global Trustee
	case "00B0B7133ED096F9B56FAE91C874BD3AC0":	// login.live.com
	case "00E9028B9578E415DC1A710A2B88154447":	// login.skype.com
	case "392A434F0E07DF1F8AA305DE34E0C229":	// login.yahoo.com_2
	case "3E75CED46B693021218830AE86A82A71":	// login.yahoo.com_3
	case "00D7558FDAF5F1105BB213282B707729A3":	// login.yahoo.com
	case "047ECBE9FCA55F7BD09EAE36E10CAE1E":	// mail.google.com
	case "00F5C86AF36162F13A64F54F6DC9587C06":	// www.google.com
		return false;
	default:
		return true; // or your own existing logic
	}
}
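If the blacklist ever grows, a switch statement gets unwieldy; the same check can be expressed as a HashSet<string> lookup. This is a sketch of mine (not code from Mono or Gendarme), using the same serial numbers as returned by X509Certificate.GetSerialNumberString:

```csharp
using System;
using System.Collections.Generic;

static class CertificateBlacklist {

	// serial numbers of the known-to-be fraudulent certificates
	static readonly HashSet<string> serials = new HashSet<string> (
		StringComparer.OrdinalIgnoreCase) {
		"009239D5348F40D1695A745470E1F23F43",	// addons.mozilla.org
		"00D8F35F4EB7872B2DAB0692E315382FB0",	// Global Trustee
		"00B0B7133ED096F9B56FAE91C874BD3AC0",	// login.live.com
		"00E9028B9578E415DC1A710A2B88154447",	// login.skype.com
		"392A434F0E07DF1F8AA305DE34E0C229",	// login.yahoo.com_2
		"3E75CED46B693021218830AE86A82A71",	// login.yahoo.com_3
		"00D7558FDAF5F1105BB213282B707729A3",	// login.yahoo.com
		"047ECBE9FCA55F7BD09EAE36E10CAE1E",	// mail.google.com
		"00F5C86AF36162F13A64F54F6DC9587C06"	// www.google.com
	};

	// call with certificate.GetSerialNumberString () from your callback
	public static bool IsFraudulent (string serialNumber)
	{
		return serials.Contains (serialNumber);
	}
}
```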
Users
How many people have the keys to your home? Scrap that - no matter the number, the fewer the better. Now curious about how many CAs you're currently trusting?
~ @ certmgr -m -list -c Trust | grep "Unique Hash" | wc -l
0
~ @ certmgr -list -c Trust | grep "Unique Hash" | wc -l
140
Yep, I executed mozroots to see how many certificates would be added, but it won't stay that high very long ;-). See the certmgr documentation for details on removing (all or some of) them.
Update: Instructions for using certmgr to remove the CA root that signed the bad certificates.
3/24/2011 09:43:17
But but but...
Since all the previously mentioned certificates were issued by a single certificate authority you also have the option of removing only this CA from those that mozroots installed. Note that this:
- does not solve the root (pun intended) issue. The same situation can occur with other CAs (from the same or a different company);
- will remove the trust from all certificates signed (past and future) by this CA.
Instructions
First check how many certificates you have installed in your Trust store:
~ @ certmgr -list -c Trust | grep "Unique Hash" | wc -l
140
Next remove the CA root certificate that signed all those bad certificates:
~ @ certmgr -del -c Trust 89B5351EC11451D06E2F95B5F89722D527A897B9
Finally validate that the certificate was removed.
~ @ certmgr -list -c Trust | grep "Unique Hash" | wc -l
139
~ @ certmgr -list -c Trust | grep "UTN-USERFirst-Hardware"
If the number decreased by one and the string UTN-USERFirst-Hardware can't be found anymore then this batch of bad certificates won't affect you.
Note: Repeat the above steps with -m if you installed root certificates on the machine store.
3/24/2011 13:33:15
Easy to (mis)use API
Here's another take at reducing string allocations inside Gendarme using the new Log Profiler. This time I focused on a very helpful, but easy to abuse, API: StreamReader.ReadLine. Similar methods suffer similar fates.
The .NET framework has quite a few helpers like this one.
They work great when quickly hacking a solution but they also have serious limitations in the real world.
E.g. how long is a line? From a Stream it could be infinite, eventually leading to an OutOfMemoryException. The same goes for ReadToEnd wrt file size, ReadAllLines... (that sounds like a rule in itself ;-)
Even if you control the line/file size there's still a price to pay: each line becomes a new string. That's not a big deal if you actually need each line, as is. However if you (a pretty common pattern) read lines, then parse each or most of them, you end up with a lot of extra allocations.
make self-test
When doing a make self-test Gendarme reads two text files to find which known defects should be ignored (i.e. not reported). E.g.
-rw-r--r-- 1 poupou users  3169 2011-01-05 15:32 mono-options.ignore
-rw-r--r-- 1 poupou users 55154 2011-02-28 18:54 self-test.ignore
So that's 58323 bytes for less than 700 lines (including blanks and comments). However the (very simple) file format requires splitting each non-comment line in two parts:
- an indicator (is this a Rule, Assembly, Type, Method or a # comment); and
- a (rule / assembly / type / method) full name
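Splitting each line this way is exactly where the extra allocations come from. The exact file format is Gendarme's own; purely as an illustration, assume lines like "M: Some.Type::Method()". A naive, hypothetical parser (my sketch, not Gendarme's code) allocates a new substring on top of the line itself:

```csharp
using System;

// Naive parse of a (hypothetical) ignore-file line such as
// "M: Some.Type::Method()" - each successful call allocates a new
// substring on top of the line string returned by ReadLine.
static class IgnoreLine {

	public static bool TryParse (string line, out char indicator, out string name)
	{
		indicator = '\0';
		name = null;
		if (string.IsNullOrEmpty (line) || line [0] == '#')
			return false; // blank line or comment
		int sep = line.IndexOf (':');
		if (sep == -1)
			return false;
		indicator = line [0];
		name = line.Substring (sep + 1).Trim (); // the extra allocation
		return true;
	}
}
```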
The string returned by ReadLine is thus often short-lived.
So what if we were reading this into a re-usable char[] buffer? Could we drop the allocations by half? It was worth a try and StreamLineReader was born. Here's the total allocations before and after IgnoreFileList was updated.
before	Total memory allocated: 71512640 bytes in 823879 objects
after	Total memory allocated: 71322520 bytes in 823084 objects
diff	190120 bytes in 795 objects
Ok, 190,120 bytes may not be a huge gain (that's 0.25% of the allocations required for a self-test). Still it represents 3.25 bytes saved for each byte read from the files (a good ratio) because other allocations, string and non-string, are now avoided as well.
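The idea behind StreamLineReader can be sketched as follows. This is hypothetical, simplified code of mine - the real class lives in Gendarme and differs in its details - but it shows how a line can be exposed through a re-usable char[] instead of a fresh string:

```csharp
using System;
using System.IO;

// Sketch of a line reader that re-uses a char[] buffer instead of
// allocating a new string per line (not Gendarme's actual StreamLineReader).
class BufferedLineReader {

	readonly TextReader reader;
	char [] buffer = new char [256];

	public BufferedLineReader (TextReader reader)
	{
		this.reader = reader;
	}

	// valid (until the next ReadLine call) in positions 0..length-1
	public char [] Buffer {
		get { return buffer; }
	}

	// returns the line length, or -1 at end of stream; the caller parses
	// the characters in place, without a string being allocated
	public int ReadLine ()
	{
		int c = reader.Read ();
		if (c == -1)
			return -1;
		int length = 0;
		while (c != -1 && c != '\n') {
			if (c != '\r') {
				if (length == buffer.Length)
					Array.Resize (ref buffer, buffer.Length * 2);
				buffer [length++] = (char) c;
			}
			c = reader.Read ();
		}
		return length;
	}
}
```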
Why bother?
IgnoreFileList was not very high in the profiler logs. However MonoCompatibilityReviewRule is at the top, for the same reason, since it downloads (from the MoMA web service), uncompresses, then reads three text files. Here's an extract of the logs:
Allocation summary
Bytes Count Average Type name
25515184 153796 165 System.String
11693296 bytes from:
Gendarme.Framework.Runner:Initialize ()
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Initialize (Gendarme.Framework.IRunner)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:LoadDefinitions (string)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Read (System.IO.TextReader)
System.IO.StreamReader:ReadLine ()
(wrapper managed-to-managed) string:.ctor (char[],int,int)
string:CreateString (char[],int,int)
(wrapper managed-to-native) string:InternalAllocateStr (int)
1365136 bytes from:
Gendarme.Framework.Runner:Initialize ()
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Initialize (Gendarme.Framework.IRunner)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:LoadDefinitions (string)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Read (System.IO.TextReader)
System.IO.StreamReader:ReadLine ()
System.Text.StringBuilder:set_Length (int)
System.Text.StringBuilder:InternalEnsureCapacity (int)
(wrapper managed-to-native) string:InternalAllocateStr (int)
1164952 bytes from:
Gendarme.Framework.Runner:Initialize ()
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Initialize (Gendarme.Framework.IRunner)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:LoadDefinitions (string)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Read (System.IO.TextReader)
System.IO.StreamReader:ReadLine ()
System.Text.StringBuilder:Append (char[],int,int)
System.Text.StringBuilder:InternalEnsureCapacity (int)
(wrapper managed-to-native) string:InternalAllocateStr (int)
1030624 bytes from:
Gendarme.ConsoleRunner:Initialize ()
Gendarme.Framework.Runner:Initialize ()
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Initialize (Gendarme.Framework.IRunner)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:LoadDefinitions (string)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:ReadWithComments (System.IO.TextReader)
string:Substring (int,int)
string:SubstringUnchecked (int,int)
(wrapper managed-to-native) string:InternalAllocateStr (int)
966576 bytes from:
Gendarme.Framework.Runner:Initialize ()
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Initialize (Gendarme.Framework.IRunner)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:LoadDefinitions (string)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:ReadWithComments (System.IO.TextReader)
System.IO.StreamReader:ReadLine ()
(wrapper managed-to-managed) string:.ctor (char[],int,int)
string:CreateString (char[],int,int)
(wrapper managed-to-native) string:InternalAllocateStr (int)
We see System.IO.StreamReader:ReadLine and also string:Substring - a clear hint that (some) lines are being parsed.
Changing the rule to use the StreamLineReader shows how much memory can be saved.
before	Total memory allocated: 71322520 bytes in 823084 objects
after	Total memory allocated: 68936880 bytes in 816067 objects
diff	2385640 bytes (3.3 %) in 7017 objects (0.8 %)
That's a much better percentage. However the ratio (wrt file size) is much lower because two of the three files used by the rule do not require parsing the lines, i.e. what ReadLine returned was usable as-is and kept in a HashSet. Only monotodo.txt, which has an optional text message, needs some extra parsing - even if only to remove the '-' at the end of the line.
Newer logs show, more clearly, that most allocations are done on the unparsed files - i.e. the optimization did not reach them:
22888072 145733 157 System.String
13264152 bytes from:
Gendarme.ConsoleRunner:Initialize ()
Gendarme.Framework.Runner:Initialize ()
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Initialize (Gendarme.Framework.IRunner)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:LoadDefinitions (string)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Read (Gendarme.Framework.Helpers.StreamLineReader)
(wrapper managed-to-managed) string:.ctor (char[],int,int)
string:CreateString (char[],int,int)
(wrapper managed-to-native) string:InternalAllocateStr (int)
1131424 bytes from:
Gendarme.ConsoleRunner:Initialize ()
Gendarme.Framework.Runner:Initialize ()
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:Initialize (Gendarme.Framework.IRunner)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:LoadDefinitions (string)
Gendarme.Rules.Portability.MonoCompatibilityReviewRule:ReadWithComments (Gendarme.Framework.Helpers.StreamLineReader)
(wrapper managed-to-managed) string:.ctor (char[],int,int)
string:CreateString (char[],int,int)
(wrapper managed-to-native) string:InternalAllocateStr (int)
Right now my options to further reduce string usage are a bit limited - at least without changing the file format, which we inherit from MoMA. E.g. the file missing.txt has more than 55000 lines because it covers every assembly shipped by MS.NET 4.0. Gendarme could easily read (and allocate) only the entries needed for the assemblies referenced by the analyzed code - if that data was available.
This will become important because I expect (or at least wish for) similar rules (e.g. something like CA1903:UseOnlyApiFromTargetedFramework) to be added to Gendarme in the next releases. Yet there's more planning needed (other rules' requirements) before changing the format.
Still it's nice to know the tooling needed to guide such work is available and simply waiting for time / hackers :-)
3/4/2011 15:15:05
The Full Price of FullName
Everyone using even a small part of Cecil knows it's amazing. Now there can be some inconveniences to using part of something - because the other parts cannot (or can rarely) be totally ignored. Gendarme only uses the reading side of Cecil - but the latter would not be so useful without support for writing as well.
This leads to a few things that are not optimal, from Gendarme's point of view, inside Cecil. The biggest issue is that a feature like writing support removes a lot of caching possibilities. This can be seen in Cecil's FullName properties (often used in ToString overrides), most of which re-generate (i.e. allocate a new string for) the full name on each call.
This is not something new - it's actually been that way since Cecil gained write-ability long ago (GSoC 2006). I hoped the situation would be better with cecil-light (and maybe it is, to some extent) but the new Mono Log Profiler (re)opened my eyes to this issue recently.
Using the log profiler made it easy to see all the string allocations caused (in part) by the FullName properties. Running mono with --profile=log enables the log profiler (see man mono for more options) and the resulting output.mlpd file (default name) contains the results. To generate a text report you then use the mprof-report tool. E.g.
mono --profile=log bin/gendarme.exe --config rules/rules.xml --set self-test \
	--log self-test.log --ignore=self-test.ignore --severity=all --confidence=all \
	bin/gendarme.exe bin/gendarme-wizard.exe bin/Gendarme.*.dll
mprof-report --traces --maxframes=8 output.mlpd > report
The resulting report file contains lots of details but, for this blog entry, I'll focus solely on the Total memory allocated line.
Phase 1: Caching and avoiding the cache
Removing the duplicate allocations is simple. A new extension method, GetFullName, was added to call and cache FullName. However this trades the saved allocations for extra lookup time, since the cache is hit very often.
While updating Gendarme's code base it quickly became apparent that, in most cases, the full name was not really needed. I.e. checking the Namespace and Name properties (both available without the string allocation cost) was enough. So, to avoid the cache, another new extension method, IsNamed, was added. The first results were significant:
before	Total memory allocated: 84653632 bytes in 1014775 objects
after	Total memory allocated: 73385936 bytes in 905715 objects
diff	11267696 bytes (13.3 %) 109060 objects (10.7 %)
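Neither extension method is shown in this post; roughly, the pair could look like the sketch below. The TypeRef class is only a stand-in of mine so the sample is self-contained - Gendarme's real methods extend Cecil's types and their exact signatures may differ:

```csharp
using System.Collections.Generic;

// Stand-in for Cecil's TypeReference, just to keep the sketch
// self-contained; like Cecil, FullName allocates on every call.
class TypeRef {
	public string Namespace;
	public string Name;
	public string FullName {
		get { return Namespace + "." + Name; }
	}
}

static class TypeRocks {

	static readonly Dictionary<TypeRef, string> full_names =
		new Dictionary<TypeRef, string> ();

	// cache the full name so each instance allocates it (at most) once
	public static string GetFullName (this TypeRef type)
	{
		string name;
		if (!full_names.TryGetValue (type, out name)) {
			name = type.FullName;
			full_names.Add (type, name);
		}
		return name;
	}

	// cheaper: compare Namespace and Name separately, never building
	// (or looking up) the full name
	public static bool IsNamed (this TypeRef type, string nameSpace, string name)
	{
		return (type.Name == name) && (type.Namespace == nameSpace);
	}
}
```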
Phase 2: API
Some of Gendarme's framework APIs also promoted the use of the full name. Again, to avoid hitting the cache, they were changed, one by one, to use separate namespace and name parameters.
Inherits
before	Total memory allocated: 73385936 bytes in 905715 objects
after	Total memory allocated: 73422560 bytes in 906391 objects
Notice that memory usage actually grew to fix the API. That's because:
- nothing more, allocation-wise, is saved (i.e. phase 1 has all the gains);
- some rules need a bit more data to work with the split namespace/name versus the full name.
Implements
before	Total memory allocated: 73422560 bytes in 906391 objects
after	Total memory allocated: 73425864 bytes in 906958 objects
Again a small memory increase, for the same reasons as Inherits above.
HasAttribute
before	Total memory allocated: 73425864 bytes in 906958 objects
after	Total memory allocated: 71064328 bytes in 807886 objects
Here a bit of duplicated code was removed, leading to less code to analyze (i.e. it's a self-test, running Gendarme on Gendarme), in turn requiring a bit less memory.
Contain[Any]Type
before	Total memory allocated: 71064328 bytes in 807886 objects
after	Total memory allocated: 71050024 bytes in 807698 objects
Another small drop. Some (now) unused extension methods and unneeded GetFullName usage were removed.
Cecil's HasTypeReference
Cecil itself uses the FullName properties (JB removed a few cases recently; he had early access to my data ;-) and has some API that requires their use, e.g. ModuleDefinition.HasTypeReference. That could be worked around by using ModuleDefinition.GetTypeReferences and some nice looking LINQ-y replacements.
before	Total memory allocated: 71064328 bytes in 807886 objects
after	Total memory allocated: 72072960 bytes in 834140 objects
Memory goes up again! Why? For the same reason as FullName: Cecil's ModuleDefinition.GetTypeReferences and GetMemberReferences allocate a new array on each call. Again this provides no useful value to read-only applications, like Gendarme, so the results were (again) cached.
before	Total memory allocated: 72072960 bytes in 834140 objects
after	Total memory allocated: 71285616 bytes in 817145 objects
So the final increase (the cached arrays) is a lot smaller than my first attempt :-)
Conclusion
I'll let the numbers speak for themselves:
original	Total memory allocated: 84653632 bytes in 1014775 objects
final	Total memory allocated: 71285616 bytes in 817145 objects
diff	13368016 bytes (15.8 %) 197630 objects (19.5 %)
There are likely a few other, indirect uses of FullName or GetFullName that could be avoided.
I suspect most will be found (and fixed) before 2.12 is released - anyway there's no real harm in them.
Besides confirming old suspects, another fun aspect of profiling is that you notice a lot of things when reading the logs, some of them surprising because they challenge/defy your expectations... and give real, nice ideas for further optimizations. Stay tuned :-)
3/1/2011 20:10:23
The views expressed on this website/weblog are mine alone and do not necessarily reflect the views of my employer.
