Saturday, October 3, 2009

EmDebr 0.5 - multihash

Here it is! I also added a nice 'smart' charset feature, so you only have to specify a hash or a file with hashes and let it try a logical charset per length. This means it will use a smaller charset as the length increases. The defaults:

* length 1-4: all
* length 5: mixalpha-numeric
* length 6: loweralpha-numeric
* length 7: loweralpha
* length 8-10: numeric

You can use the '-s' option to set the offset for these charsets. The default is 4; set it to 5 to have it run the larger charsets on longer lengths. If you want to run a very quick crack job on your hashes, set '-s 3'. Estimated running time (2009) per -s value:

* 3: seconds - couple of minutes
* 4: minutes - few hours (default)
* 5: half an hour - several hours
* 6: days - weeks
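As a rough sketch of the 'smart' charset logic above (in Python; the real EmDebr is C, and the charset contents and ladder below are my reconstruction from the post, not its actual tables):

```python
# Hypothetical sketch of EmDebr's 'smart' charset selection; the
# charset contents and names are assumptions based on the post.
CHARSETS = {
    "all":                "abcdefghijklmnopqrstuvwxyz"
                          "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*()-_+=",
    "mixalpha-numeric":   "abcdefghijklmnopqrstuvwxyz"
                          "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
    "loweralpha-numeric": "abcdefghijklmnopqrstuvwxyz0123456789",
    "loweralpha":         "abcdefghijklmnopqrstuvwxyz",
    "numeric":            "0123456789",
}

# Ordered from largest to smallest charset; the '-s' offset shifts
# where the steps down the ladder happen.
LADDER = ["all", "mixalpha-numeric", "loweralpha-numeric",
          "loweralpha", "numeric"]

def smart_charset(length, offset=4):
    """Pick a charset name for a given password length.

    With the default offset of 4: lengths 1-4 use 'all', 5 uses
    'mixalpha-numeric', 6 'loweralpha-numeric', 7 'loweralpha',
    and 8+ 'numeric'.
    """
    step = max(0, length - offset)   # how many steps down the ladder
    return LADDER[min(step, len(LADDER) - 1)]
```

Bumping '-s' to 5 shifts the whole ladder one length up, so length 5 still gets the full charset.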

You can also tune the bitmap, setting different sizes. Read my previous posting for details about the bitmaps and the other techniques I used in this version.

I release this version as source only, as a lame attempt to stop script kiddies from using this software for illegal purposes. I expect that those who want to use it for educational purposes or for legal penetration testing are able to compile it themselves.


Wednesday, September 30, 2009

Multihash support

I really didn't think I'd care to actually write multihash brute forcers. I'd need to do a lot more comparing, sorting, a lot more hash reversals, etc... But I did it!

It's still a work in progress, but EmDebr already finds hashes quite fast on huge hash lists. Let me tell you a bit about my progress, the things I ran into and the ideas I borrowed from others :)

So I started out with some functions to compare calculated hashes to all the input hashes. Every round EmDebr generates 12 hashes, which then need to be compared to, for example, 1000 hashes. You don't want to compare all of these one by one, as that would slow down the cracker for every additional hash you are searching for. The cracker should still be decently fast when cracking an input list of over 100k hashes.

A very common way to search a list for a possible match is to do a binary search on a sorted list. This works fairly nicely, but still gets fairly slow on large hash lists.

An idea I borrowed from the author of 'CUDA-Multiforcer' (and probably used by others as well) is to make a shared bitmap. The idea is to map a certain part of each input hash to a specific location in a bitmap. For example, you take the first 32 bits of the hash, map that to a certain bit and set it to '1'. You do that with every input hash and end up with a chunk of memory filled with 0's and 1's. When you calculate a hash during the cracking process, you do the same mapping and check whether there is a '1' at that location. If so, there must be a hash with the same first 32 bits in the hash list. This strategy means you only do one lookup for every calculated hash, and only start a binary search on the full hash list when you have a partial match.

You can specify the size of the bitmap. The larger the bitmap, the more unique bits you'll have in it, so the fewer matches you get and thus the fewer binary searches you'll do. I did some benchmarking on my own system and came up with 1MB as a very reasonable size for both small and large hash lists. I tried some variations as well and ended up using 2 bitmaps: the first is created from the first 32 bits of each hash (not using all of those bits), the second from the second 32 bits. This was faster than using a single bitmap of double the size.
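A minimal single-bitmap sketch of this prefilter (Python stand-in for the C implementation; the 1MB size and the exact bit mapping are assumptions based on the description above):

```python
# Sketch of the shared-bitmap prefilter: one bit per input hash, keyed
# on (part of) its first 32 bits. Simplified single-bitmap version.
BITMAP_BITS = 1 << 23        # a 1MB bitmap holds 2^23 bits
MASK = BITMAP_BITS - 1

def build_bitmap(hashes):
    """Set one bit for every input hash (hashes given as raw bytes)."""
    bitmap = bytearray(BITMAP_BITS // 8)
    for h in hashes:
        first32 = int.from_bytes(h[:4], "little")
        bit = first32 & MASK             # only part of the 32 bits is used
        bitmap[bit >> 3] |= 1 << (bit & 7)
    return bitmap

def maybe_in_list(bitmap, h):
    """One lookup per calculated hash: False means definitely not in
    the list; True means go do the binary search on the sorted list."""
    bit = int.from_bytes(h[:4], "little") & MASK
    return bool(bitmap[bit >> 3] & (1 << (bit & 7)))
```

A '1' bit can be a false positive (two hashes sharing the mapped bits), which is exactly why a hit still falls through to the binary search.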

The next problem was 'reversing', as my single-hash EmDebr used a fair amount of hash reversing. This reversal takes the input hash and reversely calculates the last round of MD5 (and a few steps more) using a specific part of the current plaintext keyspace. With multihash support we'd have to do this process for every hash, quite often. That's like cracking salted hashes! And it gets worse: every time after re-reversing, we'd have to sort the full list again and recreate the bitmaps!

So that sucks a lot in terms of speed, say 1 Mhashes/s. So I started out with just doing a full MD5 calculation again, not using any reversal at all. Once I got things to work and actually found plaintexts for the input hash list, I started playing with reversal again. The least you can do is a one-time reversal of the input hashes, up to the step where you need some plaintext input. This saves a single MD5 step and 4 additions. The next step to reverse already needs a part of the plaintext input. Luckily it's just 'W2', the part that contains characters 9-12 of the plaintext. This part doesn't change that often with a regular sized character set. The next (or previous ;)) steps to reverse don't take input from our (<16 character) plaintexts, so we can just continue up to around half of the last round of MD5. There we run into 'W1' (characters 5-8), so we have to stop. In the end this still allows us to reverse 8 MD5 steps! With a normal charset this pays off, gaining some extra Mhashes/s. For a small charset it doesn't work very well, but that's just 'numeric' in EmDebr; 'loweralpha' with 26 characters runs fine.
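To illustrate why reversal works at all: a single MD5 step is invertible as long as you know the other three state words and the message word it consumed, so steps that only depend on constant parts of the plaintext can be peeled off the target hash once. A simplified round-trip of one round-4 step (just an illustration, not EmDebr's actual reversal code):

```python
# One MD5 round-4 step and its inverse, on 32-bit words.
M32 = 0xFFFFFFFF

def rotl(x, s):
    return ((x << s) | (x >> (32 - s))) & M32

def rotr(x, s):
    return ((x >> s) | (x << (32 - s))) & M32

def I(b, c, d):
    # MD5's round-4 boolean function: c xor (b or not d)
    return (c ^ (b | (~d & M32))) & M32

def step_forward(a, b, c, d, w, t, s):
    """Forward step: a' = b + ((a + I(b,c,d) + w + t) <<< s)."""
    return (b + rotl((a + I(b, c, d) + w + t) & M32, s)) & M32

def step_reverse(new_a, b, c, d, w, t, s):
    """Undo the step when b, c, d and the message word w are known:
    a = ((a' - b) >>> s) - I(b,c,d) - w - t."""
    return (rotr((new_a - b) & M32, s) - I(b, c, d) - w - t) & M32
```

The 'W1'/'W2' limit in the post is precisely the point where `w` stops being constant over the keyspace, so the step can no longer be undone once up front.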

For the sorting I started out with 'default' quicksort, but ended up borrowing the Radix sort implementation from vampyr's Mysql/Sha1/Mssql GPU cracker.

My last improvement was speeding up the binary searches. Especially with larger hash lists, you still have to do these binary searches quite often. I now use a small index to limit the range of the binary search: the first 8 bits of the hash cut the hash list into 256 parts. This even improved search speed on smaller hash lists!
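A sketch of such an index (Python stand-in for the C code, which works on raw sorted hash records): the first byte of a hash selects one of 256 buckets, and the binary search only runs inside that bucket's slice of the sorted list.

```python
import bisect

def build_index(sorted_hashes):
    """starts[b] is the first position whose hash begins with a byte
    >= b, so bucket b is the slice starts[b]:starts[b+1]."""
    starts = [bisect.bisect_left(sorted_hashes, bytes([b]))
              for b in range(256)]
    starts.append(len(sorted_hashes))
    return starts

def contains(sorted_hashes, starts, h):
    """Binary search limited to the bucket chosen by the first byte."""
    b = h[0]
    i = bisect.bisect_left(sorted_hashes, h, starts[b], starts[b + 1])
    return i < starts[b + 1] and sorted_hashes[i] == h
```

With 200k hashes each bucket averages under 800 entries, so every search skips roughly 8 of its comparison steps.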

To finish off, a nice little overview of my progress in terms of speed (using a 'normal' sized hash list of 30-40 hashes):

* 52 Mhashes/s using a 512MB bitmap
* 99 Mhashes/s using a 1MB bitmap (seems bigger ain't always better)
* 107 Mhashes/s using two 1MB bitmaps
* 115 Mhashes/s using partial reversal
* 125 Mhashes/s using an index

With a hash list of 200,000 hashes EmDebr still keeps up, only falling back to around 119 Mhashes/s!

To compare: single-hash EmDebr did around 175 Mhashes/s. I'm actually quite satisfied with the current results on multihash EmDebr, given the reduced reversing and all the extra comparing! I just have to clean things up, make it display output in a nice way and let it write results to a file. Stay tuned!

Monday, September 7, 2009

LM2NTLM, the Unicode corrector

So in my very first blog posting I mentioned I would write about:

* A tool for doing not only case correction with a cracked LM hash and the accompanying NTLM hash, but also do something I call unicode correction. I still need to release this tool, but I already implemented the code in rcracki_mt.

I kind of forgot about this tool. It was actually my first attempt at making something like a brute forcer. I'm not releasing the source code (yet), I need to clean it up sometime.

You can use this tool when you have cracked an LM hash and want to know the correctly cased password that produces the accompanying NTLM hash. Usually you'll be done after trying all the different uppercase/lowercase variations, but sometimes there is a strange character in the password. For example, it could be a character with an accent that maps to a regular uppercase character in the LM hash.

Some time ago I worked out a lot of these mappings, per 'Windows language version' (or actually per OEM codepage). LM2NTLM tries all the known mappings per character. If it doesn't find the correct password, it starts again, replacing 2 characters with 'strange' ones. Again, if it doesn't find the password, more characters are tried. At that point you might start doubting the validity of the hashes and/or the password you are providing.
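A hypothetical sketch of that search order (the mapping table below is made up, and the hash check is passed in as a callable; the real tool compares MD4-over-UTF-16-LE NTLM hashes against its own per-codepage mapping data):

```python
from itertools import combinations, product

# Made-up example mappings: LM uppercase char -> possible originals.
MAPPINGS = {
    "A": ["a", "A", "á", "à", "ä"],
    "E": ["e", "E", "é", "è"],
}

def correct(lm_plain, matches, max_replacements=3):
    """Try the known mappings on 1 position, then 2, and so on.

    `matches(candidate)` stands in for hashing the candidate with NTLM
    (MD4 over UTF-16-LE) and comparing it to the target hash."""
    chars = list(lm_plain)
    for n in range(1, max_replacements + 1):
        for positions in combinations(range(len(chars)), n):
            # characters without known mappings only map to themselves
            choices = [MAPPINGS.get(chars[p], [chars[p]]) for p in positions]
            for repl in product(*choices):
                cand = chars[:]
                for p, ch in zip(positions, repl):
                    cand[p] = ch
                candidate = "".join(cand)
                if matches(candidate):
                    return candidate
    return None
```

The work grows combinatorially with the number of replaced positions, which is why doubting your input hashes becomes reasonable once several rounds have failed.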

How do you actually use this tool? Just run it without arguments and you'll be provided with the information you need. If the corrected password is found, the tool shows 'some' unicode output; you might actually want to inspect the "Password in hex" and spot the 'odd' unicode character (2 bytes). You can use this site to find out which character it really is.

Download: (Windows 32bit .exe)

p.s. You can use LM2NTLM for regular case correction as well ;)

Wednesday, August 5, 2009

Over 50% speed increase on the new EnTibr

Let's keep it short: new plaintext generation and caching.

Old speed: ~200 Mhashes/s
New speed: ~315 Mhashes/s

Just goes to show the algorithm itself should not always be the main target of optimization.


Tuesday, July 28, 2009

New and faster EmDebr finally done

A little later than I planned (NTLM tables fixed and currently uploading), but here we go with version 0.4 of EmDebr, my SSE2 enabled MD5 password cracker. The main improvement is the plaintext generation for passwords > 4 characters. There are now two types of cracking threads, where one, the old version, is used for lengths <= 4. So what exactly is different now?

EmDebr generates MD5 hashes for 12 plaintexts simultaneously. In the old plaintext generation I used to generate the 12 plaintexts by just moving on to the next plaintext 12 times every round. With the new generation I split the generation into 2 parts: 'first 4 characters' and 'character 5 and beyond'. The second part is now different for all 12 plaintexts, so the first part can be equal for all of them. This way we only have to change 1 part every round, until we have run through all combinations of the first part (26*26*26*26 = 456976 rounds with loweralpha). At that point we advance the second part of each of the 12 plaintexts and start over again.
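Scaled down to 3 lanes and a 3-character set for readability, the split generation could be sketched like this (my reconstruction from the description above, not EmDebr's actual C code):

```python
# Split plaintext generation: all lanes share the head part and differ
# only in their tail, so each round only the shared head advances.
CHARSET = "abc"   # scaled-down charset (EmDebr: e.g. loweralpha)
LANES = 3         # scaled-down lane count (EmDebr: 12 SSE2 lanes)

def head_odometer(width):
    """Run through every charset combination for the shared head."""
    idx = [0] * width
    while True:
        yield "".join(CHARSET[i] for i in idx)
        pos = 0
        while pos < width:                 # increment with carry
            idx[pos] += 1
            if idx[pos] < len(CHARSET):
                break
            idx[pos] = 0
            pos += 1
        if pos == width:                   # odometer rolled over
            return

def generate(head_width, tails):
    """Each lane holds one tail while the head runs through all its
    combinations; then every lane advances to its next tail."""
    tails = iter(tails)
    while True:
        lane_tails = [next(tails, None) for _ in range(LANES)]
        if lane_tails[0] is None:
            return
        for head in head_odometer(head_width):
            for t in lane_tails:
                if t is not None:
                    yield head + t
```

With a width-4 head over loweralpha, the inner loop runs those 456976 rounds before any lane's tail has to move, which is what makes the per-round work so cheap.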

I added another speedup, using some sort of 'character couple' cache. I got the initial idea by looking at brute forcing code by Sascha Pfaller. His implementation generates a fairly large bunch of plaintexts at once before starting to hash them. That approach didn't really work well for me, but it did give me an idea. I'm now using an array of shorts (2 bytes) in which I store all combinations of 2 characters of the charset. With loweralpha it contains aa-zz. So instead of 'generating' a plaintext every round, I just 'set' a plaintext every round.
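A sketch of the couple cache (Python stand-in; EmDebr itself stores the couples as 2-byte shorts in a C array and copies them into the plaintext buffer):

```python
from itertools import product

# 'Character couple' cache: precompute every 2-character combination
# of the charset once, then build plaintexts by copying couples in,
# instead of recomputing individual characters each round.
CHARSET = b"abcdefghijklmnopqrstuvwxyz"

# all couples aa..zz; each entry is a 2-byte chunk (a 'short' in C)
COUPLES = [bytes(p) for p in product(CHARSET, repeat=2)]

def set_plaintext(buf, couple_index, offset=0):
    """'Set' two characters at once by copying a cached couple into
    the plaintext buffer."""
    buf[offset:offset + 2] = COUPLES[couple_index]
```

One table lookup and a 2-byte copy replace two per-character computations, which is where the speedup comes from.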

EmDebr now went from like 135 Mhashes/s to like 176 Mhashes/s on my system!
I might clean up and release a new version of EnTibr soon as well; that one got an even bigger speedup.


p.s. I haven't tested this code on Linux this time.

Tuesday, July 21, 2009

The power of a PlayStation 3

After using my PlayStation 3 for some nice brute forcing, I was impressed by the power it has. The Cell processor in it performs faster than the CPU in my quadcore system. Consider the fact that the PS3 was released back in 2006 and my CPU is just one year old!

After writing a brute forcer for it, I decided to try and enjoy the power of the machine in a completely different way... I actually used it the right way, and the power was overwhelming! Try playing Assassin's Creed on it, for example :) The graphics, the physics, the smooth feel of the gameplay, just awesome! Check out some ingame footage, I can't wait for part 2 of this title.

So lately I've mainly been using my PS3 for playing several newly bought games... and trust me, a PlayStation 3 is a powerful machine! :D

Monday, July 20, 2009

Free Rainbow Tables?

I was supposed to come up with new versions of EnTibr and EmDebr in my next blog post, but something came in between: FreeRainbowTables kind of shut down because PowerBlade stopped his work/hosting on the project. The project has been a great way of generating and distributing rainbow tables for free, but it takes a fair amount of time to keep it running, especially if you want to keep improving. The news got out and was picked up on Slashdot, creating a nice flood of reactions from people willing to take over the site. I think FreeRainbowTables won't die and will continue in the near future!

Anyway, apparently there were some problems with the last few generated NTLM tables (ntlm_mixalpha-numeric#1-8_0 and ntlm_mixalpha-numeric#1-8_1), causing the indexing process to fail. As PowerBlade didn't have time to fix the tables, he suggested that I just release the original tables in standard .rt format. I don't really like that format, especially because one table is then around 160GB instead of like 110GB. So I started investigating the issue. While by now I thought I knew about every technical detail of rainbow tables, I found myself learning quite a lot about them again! This time I had to master the binary files and especially the indexing part. I'm currently trying to fix the tables so they can be released on the site and the GARR mirror. I just need a little more time for that.

So after fixing those tables I hope to finish off the new versions of my brute forcers! At least there's some good news: after fixing the bug I had in EnTibr, speed didn't dramatically drop like it did with other bugs before :P It's currently doing around 310 Mhashes/s AND actually finding plaintexts!