114: HD
HD Moore (https://twitter.com/hdmoore) invented a hacking tool called Metasploit. He crammed it with tons of exploits and payloads that can be used to hack into computers. What could possibly go wrong? Learn more about what HD does today by visiting rumble.run/.
Sponsors
Support for this show comes from Quorum Cyber. They exist to defend organisations against cyber security breaches and attacks. That’s it. No noise. No hard sell. If you’re looking for a partner to help you reduce risk and defend against the threats that are targeting your business, and especially if you are interested in Microsoft Security, reach out to www.quorumcyber.com.
Support for this show comes from Snyk. Snyk is a developer security platform that helps you secure your applications from the start. It automatically scans your code, dependencies, containers, and cloud infrastructure configs — finding and fixing vulnerabilities in real time. And Snyk does it all right from the existing tools and workflows you already use. IDEs, CLI, repos, pipelines, Docker Hub, and more — so your work isn’t interrupted. Create your free account at snyk.co/darknet.
Listen and follow along
Transcript
Did you know that in 1982, a robot was arrested by the police?
Yeah, get this.
It was standing on North Beverly Drive in Los Angeles, and it was there handing out business cards to people.
It could talk too, and it was telling people random robot things.
Well, it was causing a commotion.
People were just standing around it staring.
Traffic jams, honking.
It was making a scene.
The police wanted to put a stop to it.
They looked around and in the robot to try to find who was controlling it, but they couldn't figure it out.
So they started dragging it off, and the robot started screaming, "Help! They are trying to take me apart!"
The officer disconnected the power source and took the robot into custody.
They put it in the cop car and drove it down to the Beverly Hills police station.
It turned out it was two teenage boys that were remotely controlling it.
They borrowed their father's robot to pass out his robot factory business cards.
It's funny how time changes our interest in things.
If a robot stood on the same corner today, handing out business cards, it would hardly be noticed.
But in 1982, that was quite the scene.
Sometimes it just takes us a while to get accustomed to the future.
These are true stories from the dark side of the internet.
I'm Jack Rhysider.
This is Darknet Diaries.
This episode is sponsored by my friends at Black Hills Information Security.
Black Hills has earned the trust of the cybersecurity industry since John Strand founded it in 2008.
Through their Antisyphon training program, they teach you how to think like an attacker.
From SOC analyst skills to how to defend your network with traps and deception, it's hands-on, practical training built for defenders who want to level up.
Black Hills loves to share their knowledge through webcasts, blogs, zines, comics, and training courses all designed by hackers, for hackers.
But do you need someone to do a penetration test to see where your defenses stand?
Or are you looking for 24-7 monitoring from their active SOC team?
Or maybe you're ready for continuous pen testing where testing never stops and your systems stay battle ready all the time.
Well, they can help you with all of that.
They've even made a card game.
It's called Backdoors and Breaches.
The idea is simple.
It teaches people cybersecurity while they play.
Companies use it to stress test their defenses.
Teachers use it in the classroom to train the next generation.
And if you're curious, there's a free version online that you can try right now.
And this fall, they're launching a brand new competitive edition of Backdoors and Breaches, where you and your friends can go head to head hacking and defending just like the real thing.
Check it all out at blackhillsinfosec.com/darknet.
That's blackhillsinfosec.com/darknet.
This show is sponsored by DeleteMe.
DeleteMe makes it easy, quick, and safe to remove your personal data online at a time when surveillance and data breaches are common enough to make everyone vulnerable.
DeleteMe knows your privacy is worth protecting.
Sign up and provide DeleteMe with exactly what information you want deleted, and their experts will take it from there.
DeleteMe is always working for you, constantly monitoring and removing the personal information you don't want on the internet.
They're even on the lookout for new data leaks that might re-release info about you.
Privacy is a super important topic for me.
So a year ago, I signed up.
DeleteMe immediately got busy scouring the internet looking for my name and gave me reports of what they found.
Then they got busy deleting things.
It was great to have someone on my team when it comes to protecting my privacy.
Take control of your data and keep your private life private by signing up for DeleteMe.
Now at a special discount for my listeners: get 20% off your DeleteMe plan when you go to joindeleteme.com/darknetdiaries and use promo code DD20 at checkout. The only way to get 20% off is to go to joindeleteme.com/darknetdiaries and enter code DD20 at checkout. That's joindeleteme.com/darknetdiaries, code DD20.
You ready to get into it?
Do you have your sixth cup of coffee today?
I did, yeah.
I finished the whole pot.
You feel, you sound like a guy who's just really turned up to, like, you know, 11.
Like
you talk fast, you, you build things quickly.
I mean, it's, it's just moving all the time for you.
Okay.
So
what's your name?
Uh, H.D.
Moore.
And how did you, um, what was some of the early stuff that you were doing security or hacking-wise when you were a teenager?
I was an internet hoodlum.
Um, got my start in the old BBS days,
going to hang out with a friend of mine.
He'd fall asleep early, leave his Mac there with his various BBS accounts, and I'd start dialing around, figure out what I could get to, download the zines, figure out how to dial into all the fun Unix machines in town.
How to dial into all the fun Unix machines in town?
See, back in the 90s, there weren't a lot of websites that you could just spend your time endlessly scrolling through.
But there were a bunch of computers configured to accept connections from outsiders.
And the way to connect to these computers wasn't over the internet, but simply to dial up that phone number directly and see if a computer picked up.
And if a computer picks up, now it's time to figure out what even is this machine and why is it listening to people dialing into it?
And you could find some weird stuff listening for inbound connections.
Stuff you probably shouldn't be getting into, but the system just was not configured to stop anyone.
HD lived in Austin, Texas, and was curious to find if any computers were listening for connections in his town.
So he started dialing random numbers to see if any would be picked up by a computer.
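The wardialing loop described here can be sketched in miniature. This is a hedged toy simulation, not ToneLoc itself: `dial` is a stand-in for a real modem call, and the numbers are invented examples.

```python
def dial(number, carriers):
    """Stand-in for a modem call: returns True if a computer answers with a carrier tone."""
    return number in carriers

def wardial(prefix, start, end, carriers):
    """Dial every number in a prefix-XXXX range and log which ones answer."""
    found = []
    for suffix in range(start, end + 1):
        number = f"{prefix}-{suffix:04d}"
        if dial(number, carriers):
            found.append(number)
    return found

# Simulated exchange: two numbers secretly have modems attached (made up for illustration).
carriers = {"512-555-0042", "512-555-0777"}
hits = wardial("512-555", 0, 999, carriers)
print(hits)  # ['512-555-0042', '512-555-0777']
```

A real wardialer would spend seconds per call waiting for a modem handshake, which is why a sweep of an entire area code took HD "pretty much every night for years."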
At one point, my mother was working as a medical transcriptionist.
And the great thing, you know, kind of, you know, early days of internet is that to do that, we had to have a whole lot of phone lines going to the house.
We had two or three regular POTS lines.
We had an ISDN line and two computers.
And, you know, she went to bed pretty early.
So as soon as she was down, I was up.
And I was running ToneLoc across the entire 512 area code pretty much every night for years.
And then whenever you find something interesting, try to figure out what it is and what you can do with it.
Some of the fun highlights from back then are like turning the HVAC on and off at the various department stores, dialing into some of the radio transmission towers and playing with that stuff.
You know, this is
obviously well before I was like 18 and was too concerned about the consequences.
But just that whole process really got me into security, security research, and eventually, you know, the internet.
This was really fun for HD.
Poking around in the dark, trying to find something interesting, and then getting lost in that system for a while.
He was fascinated by it all.
Eventually, the internet started forming a little more, and IRC picked up in popularity.
This was just a chat room, and HD was spending a lot of time in the Phrack chat channel.
Now, Phrack is the longest-running hacker magazine.
The first issue was published in 1985, and by the 90s, they had quite a trove of information.
If you wanted to learn how to hack or break computers, start by reading every issue of Phrack, and by the time you're done, you'll be pretty knowledgeable about hacking.
So the Phrack chat channel felt like home to HD, and he loved hanging out there, learning about hacking.
We're all using our silly aliases and playing with exploits and generally causing havoc between each other.
And out of the blue, I get a message from somebody saying, hey, you're looking for a job.
I'm like, yeah, actually, I am.
And he's like, well, you're not too far.
How far are you from San Antonio?
I'm like, well, I could drive there.
So he set me up with an interview with Computer Sciences Corporation, which is now just called CSC.
And they were doing work for, I think at the time it was called AFIWC, or it eventually became AIA, the Air Intelligence Agency.
So, the U.S. Air Force's intelligence wing.
And they were basically building tools for various red teams inside the Air Force.
And I was like, that sounds like a lot of fun, writing exploits for the military.
I'm all about that.
So I was a really terrible programmer and I'm not much of a better one these days.
But it was a fun first job to go down there and
get these somewhat vague briefs, like: we need a tool that listens on the network for packets and does these things with them, or scans the network looking for open registry keys and does this other stuff.
So that was my first kind of professional experience of building offensive tooling.
I think it's kind of weird that a recruiter for a DoD contractor was looking in the Phrack chat room to find people to come build hacker tools in order to test the defenses of the Air Force.
But that's what happened.
HD was now using his hacking skills for good.
And while he was in high school, at some point while working for this contractor, they asked him to see if he could hack into a local business.
That business had actually paid for a security assessment and wanted to see if they were vulnerable.
And it was a lot of fun.
We basically just walked in and owned everything.
It was great.
Outside, inside, you know, their HP 3000 servers, everything in between.
had a blast doing it.
And we went back to CSC and said, hey, we'd like to start doing more commercial pen tests.
And they came back and said, nope, we're federal.
That's it.
So we took the whole team and started a startup.
That was Digital Defense.
HD loved doing security assessments for customers; that's what a penetration test is.
Customers would hire them to see if their computers were vulnerable, and they did other things too, like monitoring for security events and helping secure the network better.
But there was a problem, a big one, if you ask me.
Back in the late 90s, exploits were hard to come by.
See, let me walk you through how a typical pen test works.
First, you typically want to start out with a vulnerability scanner.
This will tell you what computers are on the network, what services are running, what apps are running, and maybe even give you an idea of what versions that software is running too.
Because sometimes when you connect to that computer, it'll tell you what version of software it's running.
Now, as a pen tester, once you know the version of an application that a computer is running, you can go look up to see if there's any known vulnerabilities.
Maybe that's an old version that they're running.
And here's where the problem lies.
Suppose that, yes, you did find a system that was not updated and was running an old version of software that has a known vulnerability.
It's simply not enough to tell the client that their server is not patched and needs to be updated.
The client might push back and say, well, what's really the risk for not updating?
And so that's why a pen tester has to actually exploit the system to prove what could go wrong if they don't update.
They need to act like an adversary would.
But to get that exploit, so that you can demonstrate to the client that this machine is vulnerable, that's the hard part.
At least it was in the 90s.
Some hacker websites would have exploits that you could download, but those were often pretty old and out of date.
So then you might start feeling around in chat rooms, trying to see who's got the goods.
And if you're lucky, you get pointed to an FTP server to download some exploits.
But it has no documentation.
And who knows what this exploit does?
It could be an actual virus.
And as a professional penetration tester, you really can't just download some random exploit from the internet and launch it on your customer's network.
No way.
Who knows what that thing does?
It could infect the whole network with some nasty virus or create some back door that other hackers can get into.
So back then, there just wasn't a place to get good exploits from.
And especially there wasn't a place to get the latest and greatest ones.
As you start rolling into the 2000s, what happened is all the folks who previously were sharing their exploits with the researchers, with kind of the community, they all basically started either just getting real jobs and stopped sharing their tools, or they thought there were ethical issues with that.
But basically it all dried up.
Some commercial firms, like Core Impact, were started around the same time to commercialize exploit tooling.
Other folks just decided they weren't doing it anymore or they got in trouble.
And so if you're a security firm trying to do pen tests for your customers, it was really difficult to get exploits back then, really difficult to know whether they're safe or not without rewriting every byte of shellcode from scratch.
And so the challenge of just getting the right tools and exploits, you had to build a lot of it in-house.
Well, this company that he was working for didn't really have the ability or expertise or resources to develop their own exploit toolkit.
But HD, being someone who's fiercely driven and part of this hacker culture, was acquiring quite a few exploits, learning how they worked, and was able to code some of his own.
But these exploits were unorganized.
They were scattered all over his computer.
The documentation wasn't there.
It was hard to share it with some of his teammates.
And that's why HD Moore decided to make Metasploit.
Metasploit is an exploit toolkit, which basically means it's a single application that has loads of exploits built into it.
So once you load it up, you can pick which exploit to use, input some parameters, and launch it on the target.
It was not so great.
But it was a basic collection of exploits that HD knew and could trust, ones that weren't filled with viruses.
This little tool he built was helping him do security assessments.
And now that he's made a framework, he can continually add new exploits to make it better.
But there are new vulnerabilities being discovered all the time.
So it was an endless job to keep adding stuff to Metasploit.
Yeah, I mean, this is a combination of, like, finding vulnerabilities myself, sharing with friends, reporting some of them, not reporting others at the time, and then just me and my friends sharing exploits all day long.
And I wrote some that weren't very good, but I'd write stuff all the time.
And then you get access to one of the really interesting ones or really high-profile ones and play with a little bit and see what you can do with it.
What ended up being the first version of Metasploit was very menu-based, very terminal-based, where you kind of picked the exploit, pick the NOP encoder, the exploit encoder, and the payload and put them all together and then send it.
By the time we got to Metasploit 2, we threw all that out the window and came up with the idea that you could assemble an exploit like Legos.
So it wasn't, you know, prior to this, most exploits had maybe one payload, maybe two payloads.
Yeah, a payload.
A payload is what you want your computer to do after a vulnerability gets exploited.
Imagine a needle and syringe.
The needle is the exploit.
It gets you past the defenses and into the system.
But an empty syringe does nothing.
The payload is whatever's in the syringe, the thing that gets injected into the computer after it's penetrated.
So what is a typical payload?
Well, it could be to open the door and give you command line access, or it could be to upload a file and execute it on that computer you just got into, or it could be to reboot the computer.
The exploit is the way in, and the payload is the action taken once you get in.
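That needle-and-syringe split can be sketched as two independent pieces composed at launch time. This is a hedged, purely illustrative model with no real exploit code: every name and interface here is invented to show the separation, and is nothing like Metasploit's actual API.

```python
def make_attack(exploit, payload):
    """Compose any exploit with any payload into one launchable attack."""
    def attack(target):
        session = exploit(target)   # the needle: get past the defenses
        return payload(session)     # the syringe contents: act once inside
    return attack

# Invented stand-ins for illustration only.
def fake_overflow_exploit(target):
    return f"session-on-{target}"

def open_shell(session):
    return f"shell via {session}"

def reboot(session):
    return f"reboot via {session}"

# The same exploit paired with two different payloads, no recompiling needed:
print(make_attack(fake_overflow_exploit, open_shell)("10.0.0.5"))  # shell via session-on-10.0.0.5
print(make_attack(fake_overflow_exploit, reboot)("10.0.0.5"))      # reboot via session-on-10.0.0.5
```

The point of the design is that the exploit and the payload never need to know about each other, so any of N exploits can carry any of M payloads.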
And yeah, the exploits that you would get your hands on back then, they had like built-in payloads.
Changing the payload wasn't always even an option, unless you had access to the source code of the exploit and could build your own payload.
And even if you did that, what happens the next time you want to use that exploit with a different payload?
You'd have to recompile the whole thing with something new and then fiddle with it to get it to actually work.
And of course, you don't want to run some payload that someone else made on one of your customers' computers unless you can examine the source code and see what it does.
HD saw this was a problem and modularized how you build an attack.
He made this easy with Metasploit, giving you the option to pick the exploit, pick the payload, and then choose your target.
It made hacking a thousand times easier.
So instead of being stuck with one payload to one exploit, you could take any payload, any exploit, any encoder, any NOP generator, and stick them all together into a chain.
And it was great for a bunch of reasons.
A lot more flexibility during pen tests.
You could experiment with really interesting types of payloads that were non-standard.
And because everything is randomized all the time, a lot of the network-based detection tools couldn't keep up.
Because everything was randomized?
This is actually a really clever thing he added to the tool.
So if you put yourself in a defender's shoes, they obviously don't want exploits being run in the network.
And they want to identify them and not let those programs run, right?
And a defender might even make a rule in the antivirus program that says, hey, if there's a program that is this size and has this many bytes and is this long and it's called this, then it's a known virus.
Do not let this program run.
Well, what Metasploit did was randomize all these parts.
They'd give it a random name and a random size and all kinds of random characters simply so that antivirus tools would have a hard time detecting it.
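Signature-based detection keys on fixed byte patterns, so re-encoding the same payload with a fresh random key changes what goes over the wire on every run. Here's a minimal, hedged sketch of that idea using a simple one-byte XOR; real Metasploit encoders are far more sophisticated than this.

```python
def xor_encode(payload: bytes, key: int) -> bytes:
    """Encode (or decode) a payload by XORing each byte with a one-byte key."""
    return bytes(b ^ key for b in payload)

payload = b"example payload"

# Two runs with different keys produce different bytes on the wire...
key1, key2 = 0x5A, 0xC3  # in practice these would be chosen randomly per run
enc1 = xor_encode(payload, key1)
enc2 = xor_encode(payload, key2)
print(enc1 != enc2)  # True: no stable byte signature for a scanner to match

# ...but each decodes back to the identical payload before it executes.
print(xor_encode(enc1, key1) == payload)  # True
print(xor_encode(enc2, key2) == payload)  # True
```

A signature written against `enc1` would never match `enc2`, even though both carry exactly the same payload; that's why the randomization gave network-based detection tools so much trouble.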
And it makes sense for Metasploit to try to evade antivirus because securing your network should be multi-layered.
The first layer would be to make sure the computers in your network are up to date and on the latest patch.
And then the next layer should be to have them configured correctly.
If both of those fail, then antivirus can inspect what's happening and try to stop an attack in progress.
But if antivirus is blocking it, it hasn't even tested whether that system is secure or not.
So it needs to go around antivirus tools to actually test the server.
And a good pen tester will test multiple layers to make sure each layer of defense is actually working.
So by definition, Metasploit was evasive by default.
Now, at the time, HD was using this tool to conduct penetration tests on people who wanted to see if their network was hackable.
HD was one of the initial people to join this company, but he wasn't in any sort of leadership role or a manager or anything.
So imagine for a moment you're HD's boss, and HD shows you this homebrew exploit toolkit, programmed to seek out and exploit known vulnerabilities in computers, with payloads built into it.
Now, clearly, in the right hands, this is a weapon.
It's an attacker's dream come true.
Some of the exploits in it are high quality, which makes it very dangerous, giving you access to pretty much anything at the time.
Him bringing in Metasploit to work was like bringing in a bucket of hypodermic syringes with their safety caps off.
And some of these were picked up off shady underground places.
Some of them were DIY homemade.
And with syringes, you typically see them in the hands of highly skilled professionals like doctors or people who need beneficial medicine or drug addicts.
So a bucket of syringes can be extremely dangerous or extremely beneficial.
There's no real middle ground.
And it was the same with Metasploit.
It was a bucket of some pretty scary exploits that if he let loose in the office would be a pretty big problem.
So bringing in a toolkit like this to work, well, HD's employer was not supportive of this tool.
I guess more accurately, they were terrified of it.
They did not want to be associated with anything I was working on.
And at the same time, they were kind of stuck with me because I was running most of the pen test operations.
Why were they terrified of it?
There's a lot of fear of exploits and liability.
The worry was that if we released an exploit and, you know, someone bad used it to hack into somebody else, somehow my company would become liable.
So they wanted to stay as far away from it as they possibly could.
It didn't help that our primary client base were credit unions, which were kind of naturally conservative and probably still are.
They didn't want to know that the people that they hired for security assessments were also releasing and open sourcing exploit tools on the internet.
This is an interesting dichotomy, isn't it?
On one hand, if you're going to be testing if a company is hackable, you need these attack tools, these weapons.
But nobody ever asks a pen tester, where are you going to get your weapons from?
They just assume since you're a hacker, you know how to do it.
But it's not like you can just type a few commands to get around some security measures.
That's like reinventing the wheel every time you want to do an assessment.
You need tools for the job, a set of attacks that you know work well and you can trust that won't put malware on your customer's network or cause harm.
But that's a lot of work to make sure of.
And if you make a hacking tool like this for yourself and maybe put it out there for someone else to use, well, that does sound like it could come back and bite you.
If someone uses it to actually commit a crime with, how much are you liable for that?
So he had to make a decision on what to do with this Metasploit tool.
If his work wasn't going to help him with it, what should he do with it?
Well, it's one of those things where, on one hand, they wouldn't support it.
On the other hand, we desperately needed this tool to do our job.
And it became a nights and weekends thing.
So I'd clock out of work and I'd go spend the rest of the night not sleeping, um, working on exploits, working on shellcode. Not particularly good exploits, but I got better eventually, and finally got to the point that we had something that was worth using all on its own, that wasn't just a crappy script or a rewrite of a bunch of known exploits. It was actually something that had some legs to it. And, you know, that led to, I think, my first trip, which was to Hack in the Box Malaysia to talk about it.
And it was a great experience to really get feedback about how different it was from what other people were doing at the time. That really kind of helped give me motivation to keep working on it.
It also helped me find people to work on it with.
So I met spoonm shortly after.
I met Matt Miller, aka Skape, right after that.
They joined the team and we just kind of kept it going as this kind of side project for the next few years.
So 2002 was when he first shared Metasploit with others, which immediately got a few people so interested in it that they wanted to help make it.
And with a few people helping him, in 2003, he decided to release Metasploit publicly for others to download and use.
After all, it was providing him a lot of value to do his job better, so it would probably make it easier for other penetration testers to do their job too.
He also decided to give it away free.
And importantly, he made it open source so anyone could inspect the code to verify there's nothing too bad going on in there.
So Metasploit.com was created.
And that was where we first started posting some interesting variants of Windows shellcode that we came up with that were much smaller than what was available otherwise.
That eventually became where we shared the Metasploit Framework code.
The downside, of course, is it gave everyone else a target to go after.
So as soon as we started posting versions of Metasploit framework to Metasploit.com, we started getting DDoS attacks, exploit attempts.
It got so bad that one guy actually couldn't hack our server.
So he hacked our ISP, ARP spoofed the gateway by hacking the ISP's infrastructure, and then used that to redirect our web page to his own web server.
So he couldn't hack our web server to deface it, but he could redirect the entire ISP's traffic to serve a defaced Metasploit.com.
Wait, the Metasploit website was getting attacked?
By who?
Well, in the early days, everyone hated Metasploit.
My employer hated Metasploit, our customers hated Metasploit.
They thought it was dangerous.
All the Black Hats, all the folks who were trading exploits underground, they absolutely hated it because we're taking what they thought was theirs and making it available to everybody else.
So it's one of those things where the professionals in the space hated it because they thought it was a script kiddie tool.
The black hats hated it because they thought we were taking away from what they had.
And, you know, all the professional folks and
employers and customers thought it was sketchy to start with.
So it took a long time to get past that.
But in the meantime, we're getting DDoS attacks.
We're having people try to deface the website.
We're having folks spoof my identity and, you know, post all kinds of terrible things on the internet under my name.
You name it.
Someone decided to attack HD for publishing exploits.
They couldn't figure out a good attack on him, so they spent time figuring out where he worked and decided to attack his employer.
They scanned the websites that his employer had and found a demo site.
It wasn't the employer's main site.
It was a tool to demonstrate how to crack passwords.
Well, this demo site was running the Samba service, but it was fully patched.
So there shouldn't be a way to hack into this through the Samba service.
HD even tried attacking it with Metasploit, but couldn't figure out a way in.
But there was someone who did know of a Samba vulnerability.
They developed their own exploit and attacked HD's employer's website and tried to get inside the system.
But their payload didn't work that well and it crashed the server.
So I got this alert saying the machine was basically shut down and crashed.
We're capturing all the traffic going in and out of the machine, just for fun to start with.
But by doing that, we're able to carve out the initial exploit.
Wow, this is fascinating.
Because HD was capturing all traffic going into and out of that machine, he was able to find the exact code that was used to exploit the Samba service, which is incredible.
I mean, it's like finding a needle in a haystack.
But then as he examined this code that was used to exploit the system, he realized it used a vulnerability completely unknown to everyone, which makes it what's called a zero-day exploit.
HD was able to analyze this and learn how to use it himself.
Did some analysis on it, contacted the Samba team saying, hey, there's a really awful remote 0-day in Samba.
And so we wrote our own version of that exploit, put it on Metasploit.com.
And that was kind of the beginning of a long, long war with, I don't even know which group it was, but they spent the next two weeks DDoSing our website for leaking their exploit, and not only leaking it, but writing a better version.
That's brilliant.
Because someone didn't like that HD created Metasploit,
they attacked his employer, which led him to discover their exploit.
And he reported that exploit to the Samba team so they could fix it.
And then he added it into his tool, Metasploit.
This made his attacker so much more mad at him.
And he continued to get attacked like this all the time.
Folks like emailing my boss telling them to fire me, things like that.
We've had...
Yeah, why are people wanting you to be fired?
They felt that publishing exploits was irresponsible and I was a liability to the company and they didn't want me to have a job because of what I was doing in my spare time.
Huh.
Did they have a point?
Did you agree with them?
It was good motivation.
Try harder.
Okay.
So
the idea that somebody's going to be upset with a side project you're working on on the weekends to the point where they're going to say,
I need to get this guy HD.
I'm going to ruin him.
I'm going to email his boss and tell his boss to fire him.
That sounds like cancel culture to me, before they even had the term cancel culture.
I guess it's not that different.
I feel like
maybe it was the equivalent of a moral ethical dilemma for them at the time.
They thought somehow I was doing something like that was morally wrong and therefore needed to be punished.
But yeah, there's definitely a lot of that.
There was pressure not just from black hat researchers and from customers who didn't like what I was doing, but also from like other security vendors saying, well, if you want business with us, then you have to bury this vulnerability.
You can't talk about this one.
Whoa.
So when he would find a vulnerability in one of the companies that were a business partner of his employer, that company was absolutely not happy when HD published the exploit and added it into Metasploit.
Because remember, Metasploit makes hacking so much easier, which means if it's in the tool, it's now easy to exploit that company's products.
So they'd get mad at him and ask him to take down the blog posts that talk about this vulnerability and remove it from the tool.
And they would even threaten to take away the partner status that they had with his employer if he didn't comply.
Things were getting pretty ugly, and his employer was growing increasingly unhappy with HD.
He was frequently finding himself in the crosshairs of many attacks.
But this is his territory.
Hacking, attacks, defending.
That's what he does during the day as his day job, but it's also what he does at night for fun.
And he even dreams about this kind of stuff.
So if someone attacks HD Moore, you know, he's going to have fun with that.
Uh, what happened is, um, some vulnerability we published was being actively exploited by some black hats who were building a botnet, and they were so mad about it, they decided they were going to use that botnet to DDoS Metasploit.com.
What they didn't realize, though, was like Metasploit wasn't a company, Metasploit was just like a side project I was running in my spare time.
And I thought the whole thing was hilarious that they were spending all this time DDoSing it.
But I didn't like the fact they were DDoSing an ISP that I liked working with.
So, this botnet was flooding both of his DNS names, metasploit.com and www.metasploit.com.
It was sending so much traffic that this site was unusable by anyone and was essentially down.
HD investigated this botnet a bit and discovered where the botnet was being controlled from.
He found their command and control server or C2 server.
And they just happen to also have two command and control servers.
So, you know, light bulb goes off.
It's like, well, let's point www.metasploit.com to one of their C2s and the bare domain name to the other one, and just sit back and wait a couple of weeks, see what happens, right?
So what happened is because those are the control servers with the botnet and the botnet was DDoSing its control servers, they got locked out of their botnet until we changed the DNS settings.
So we essentially hijacked their own botnet to basically flood their own C2 indefinitely until they finally emailed us a week later saying, please can we have it back?
Wait, what?
They emailed you?
Yeah, because they didn't know how else to get a hold of us.
So they basically lost their botnet.
And we said, okay, well, don't DDoS us again.
They went, okay, we won't.
And that was the end of that.
And we never got DDoSed again.
We're going to take a quick ad break here, but stay with us because HD is just getting started with the stories that he has.
This episode is sponsored by Vanta.
In today's fast-changing digital world, proving your company is trustworthy isn't just important for growth, it's essential.
That's why Vanta is here.
Vanta helps companies of all sizes get compliant fast and stay that way with industry-leading AI, automation, and continuous monitoring.
So whether you're a startup tackling your first SOC 2 or ISO 27001, or an enterprise managing vendor risk, Vanta's trust management platform makes it quicker, easier, and more scalable.
Vanta also helps you complete security questionnaires up to five times faster so you can win bigger deals sooner.
The results?
According to a recent IDC study, Vanta customers slashed over $500,000 a year in costs and are three times more productive.
Establishing trust isn't optional.
Vanta makes it automatic.
Visit vanta.com slash darknet to sign up for a free demo today.
That's V-A-N-T-A, vanta.com/darknet.
Who do you associate yourself with?
Because I'm feeling like you've got like three legs in three different buckets here.
And one leg, you're standing in the Phrack, you know, IRC channel, which is black hat hackers typically at the time, right?
And these are the people who may be either just, I don't know, hacktivists or cyber criminals proper.
And then you've got, you know, your relationship with the DOD.
And then you've got your professional relationship where you're trying to show yourself, like, look, I've got some real chops here.
I can do this kind of penetration work for a fee.
I'm a professional, you know, this kind of thing.
And I've got actually a tool that I'm developing that can be used for professionals.
So how do, where, where in this scenario do you like feel like you're most at home?
Good question.
I definitely felt like an outsider in all those groups.
The Phrack channel went through a big change right around 2000 or so, where it used to be some pretty well-respected hacker researcher types, and got taken over by a group of trolls that called themselves the Phrack High Council.
And those folks and I did not get along.
And that led to this multi-year, just constant trolling and chaos and things like that.
Even professionally, though, I didn't really have anyone I could really hang out with besides my coworkers, and I had some good friends there, but that was it.
I almost kind of felt like an outsider in all three of those camps, I guess.
Yeah, because I know about this sort of infighting in the hacker communities: when a hacker thinks they're hot stuff, they post something, they make a website, whatever.
Other hackers will try to dox them and attack their website.
And it's just constantly doing that.
Did you feel like that's kind of what this was?
Was just hacker versus hacker?
Like, look, I'm a smarter hacker than you are.
Or did it feel like, no, you're not one of us.
Get the hell out of here kind of attack?
It definitely wasn't friendly.
You know, some friends and I would always go after each other's stuff, and it wasn't a big deal.
You say, hey, look, check your home directory.
There's a file there or whatever it is, right?
But these are folks who would steal your mail spool and publish it on the internet; they would forge stuff in your name; they'd try to get you fired; they'd try to get you arrested; they'd do everything.
This is prior to swatting, of course, but this was pretty much everything they could do to ruin your life.
This was no holds barred, we're ruining you, and you know, good luck fighting back.
Um, so this is definitely not the fun kind.
Now, by this point, HD and the team working on Metasploit have found lots of new, unknown vulnerabilities themselves, stuff that the software maker has no idea is even a problem.
And they do this by scanning the internet, attacking their own test servers, and trying to break their own computers.
But what do you do when you find an unknown vulnerability in some software?
Well, the best avenue is to find a good way to report it to the vendor, right?
But HD has had a bit of a history with reporting bugs to vendors.
When I was in my teenage years and still in high school, I was working on a bunch of the NT4 exploits for fun, like the old .HTR buffer overflow and things like that. And, you know, while I was poking around one day, I found a way to bypass their country validation for downloading, I think it was, NT Service Pack 4 from Microsoft. So instead of it looking at your IP address and doing geolocation, it'd look at a parameter you put in the URL instead, and you could basically download, you know, the high-encryption version of NT SP6 from Russia or wherever else, which, you know, was not a good thing at the time because of all the export controls. So I contacted the Microsoft security team, which was pretty nascent back then, and said, hey, you can bypass all your export controls.
This is probably not good.
And they're like, Well, what do you want?
I'm like, I didn't really want anything, but what do you got?
And they said, Well, what are you looking for?
I'm like, Can I have an MSDN license?
That'd be awesome, you know.
And that was kind of the beginning of a long series of just really weird interactions with the security team there.
Fast forward.
I'm trying to remember what an MSDN license was.
MSDN was the license that gave you access to all the operating system CDs and media for everything Microsoft made.
So if you had an MSDN license, you basically have a, you can install any version of Windows you want, any version of Exchange server, all that stuff.
So as a hacker or someone doing security research, it was a gold mine because you have all the bulk installers and data all in one place.
Got it.
Okay.
So, you know, fast forward to my first startup and finding vulnerabilities in Microsoft products and doing a lot of work on like ASP.NET misconfigurations and other stuff we've run into during pen testing.
And Microsoft did not like having vulnerabilities reported.
They'd do anything they could to shut you up.
They did not like having someone releasing exploits for vulnerabilities in their platform.
The first startup I worked at was a Microsoft Partner.
So we had a discount for MSDN and things like that for internal licenses.
And a gentleman at Microsoft kept calling our CEO saying, hey, you need to stop letting this guy publish stuff.
You need to fire this person or we're going to take away your partnership license.
And so they kept putting pressure on my coworkers, on my boss, and the CEO to get rid of me, basically, because of the work I was doing to publish vulnerabilities.
And that just made me angry, right?
Like, I got a chip on my shoulder pretty early on about that.
And by the time I got to the Hack in the Box conference in Malaysia to announce Metasploit, they had a Windows 2000, or was it Windows 2003 Server?
I think it was being announced at that time.
And they had a CTF for it.
I was like, great, I'll do the CTF.
So CTF stands for capture the flag.
It's a challenge that a lot of these hacker conferences have, where they put a computer in the middle of the room and see who can hack into it.
In this case, it was a fully patched Windows computer, and HD was curious if he could find a vulnerability to get into it.
So he created some tools to send it random commands and inputs, anything that he could send to it to just try to cause it to malfunction.
And sure enough, he did get a fully patched Windows computer to malfunction.
So he examined the data that he sent to this computer to cause it to malfunction, and he was able to use that to create an exploit, which got him remote access to the system.
Now, since this was an unknown bug to Microsoft, and Microsoft was there at this hacker conference sponsoring the thing, he went up to them and told them about it.
They're like, great, report it to us.
I'm like, no, it's mine.
Like, am I going to get a reward for it?
What are you going to do with it?
Like, I found this vulnerability.
It's mine.
Do what I want to with it.
And so I reported it to the Hack in the Box organizers.
Like, hey, Microsoft's trying to pressure me to not disclose this thing that I found.
Like, that's not the point, right?
The point is, yeah, I found a bug in your server and I'm going to talk about it.
Like, and I'm going to share it with you.
But the idea is to go publish it afterwards.
And they shut the whole thing down.
So I heard secondhand that Microsoft threatened to pull sponsorship of the Hack in the Box conference if they let that vulnerability get published.
So the whole thing got swept under the rug.
See, at the time, Microsoft didn't take their security as seriously as they should.
They weren't publishing all the bugs that they were finding or rewarding people for the bugs they found.
And as HD tells it, they were asking people to not publish bugs publicly.
They thought it was just better to hide some of these attacks so that nobody knows about it.
But around this time, in 2002, Bill Gates sent a famous memo to everyone at Microsoft which said security is now a priority of the business, and they started a new initiative called the Trustworthy Computing Group.
Well HD saw that this bug he found was causing problems with the conference and he liked the conference and didn't want them to lose their biggest sponsor.
So he agreed to just sit on this bug and do nothing with it.
Six months later, someone else found the same bug and reported it to Microsoft and they were able to fix it.
And it was only then that HD published his version of it.
So the short version is I'm more than happy to tell the vendors about it, but I'd also want to make it public at some point.
Like these are vendors that at the time were sitting on vulnerabilities for more than a year or two years, maybe never disclosing it.
They had no motivation to ever disclose a vulnerability you reported to them and they would do anything they could to pressure you not to.
Microsoft was probably one of the biggest offenders at the time of pressuring researchers to not disclose any vulnerabilities they found.
Do you know if there was even, like, a vulnerability list that they had published at that time?
I think, I mean, there were CVEs at the time, and Microsoft had their security advisories, but the security advisories were just the tip of the iceberg.
There was so much stuff being reported to them that they would just shut down.
The challenge with keeping these secret, whether it's because you're the vendor and don't want people to know about it and it's bad marketing, or whether you're a black hat and trying to use it to break into systems, is that nobody else out there can protect themselves.
They can't test themselves.
They don't know whether they're actually vulnerable, whether the security product they bought to prevent exploitation is actually working, right?
So one of the great things about having a publicly available exploit for a recently disclosed vulnerability is you can make sure that all your mitigations, all your controls, all your detection are actually working the way they're supposed to.
And everybody else did not want that.
At the time, Microsoft's browser was Internet Explorer.
And with the chip on his shoulder from dealing with Microsoft in the past, HD decided to see how many vulnerabilities he could find in Internet Explorer.
Basically, we're myself, a couple of friends, we put together some browser fuzzers.
We used the browser's own JavaScript engine to just find hundreds and hundreds of vulnerabilities.
We tested every single ActiveX control across Windows and just found bugs in all of them at once.
So we basically created this mass vulnerability generator and we're sitting on probably like six, 700 vulnerabilities at the time and the vendors were just not moving on it.
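Fuzzing like this boils down to a simple loop: mutate a known-good input, throw it at the target, and save anything that causes a crash. Here's a minimal sketch of that loop; the names (`mutate`, `fuzz`, `fragile_parser`) and the stand-in target are illustrative, not HD's actual tooling.

```python
import random

# A toy mutation fuzzer: flip a few bytes of a known-good input at
# random and record every variant that makes the target blow up.

def mutate(data: bytes, n_flips: int, rng: random.Random) -> bytes:
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, seed_input: bytes, iterations: int = 2000, seed: int = 0):
    """Run `target` on mutated inputs; collect every input that raises."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_input, n_flips=3, rng=rng)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)  # saved for triage / exploit work
    return crashes

# A stand-in "parser" that dies on one particular byte value, the way a
# browser might crash on a malformed ActiveX parameter.
def fragile_parser(data: bytes):
    if 0xFF in data:
        raise ValueError("parser crashed")

crashes = fuzz(fragile_parser, b"A" * 64)
```

Each saved crasher is then examined by hand, which is exactly the step HD describes next: working out, from the input that caused the malfunction, whether it can be turned into a working exploit.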
He kept reporting bug after bug to Microsoft.
But from his perspective, nothing was getting done.
And so now, what do you do when you've told the vendor about a bunch of bugs, and they didn't act on it, and you have hundreds more?
It got to the point that, like, we just gave up. We said, you know what we're gonna do? An entire month. We're gonna drop an 0-day every single day for a month straight, and we'll still have hundreds left over afterwards. And it was that particular sequence and that particular event that I think finally killed ActiveX and Internet Explorer.
Why, why do you think that?
Well, after the 30th or 40th ActiveX vulnerability we reported to them, we're like, hey guys, we have two or three hundred more; we can keep going all year at this point.
And it was a good indication that they realized there was no safe way to implement ActiveX control loading in Internet Explorer.
Microsoft was realizing the security in their products wasn't cutting it.
They needed to do better, and they were working on that.
In fact, what they started doing was offering jobs to people who were reporting bugs to them.
So if you were someone who was previously reporting a bunch of vulnerabilities to Microsoft, all of a sudden you got a job offer instead.
I mean, there's also the amazing security research group called Last Stage of Delirium, out of Poland.
And three of the four folks that were part of this group joined Microsoft during this time.
Well, did they contact you?
We're friends. I met them in Malaysia, and I'd see them at conferences and stuff like that. I definitely got a few offers from Microsoft early on, but, you know, I kind of pushed back with ridiculous terms, saying no way in hell, essentially. Mostly because I felt like they didn't really have the best interest of the community at heart; they would shut down anything I was working on. And for the most part it was true: folks who took a job at Microsoft after doing full-time vulnerability research before, you never heard a peep out of them again.
Can you imagine if that happened, if HD got hired by Microsoft?
They might have tried to close down Metasploit altogether.
And what a loss that would have been.
Because Metasploit was starting to pick up some traction.
And while it was hated by many, it was being used by many more.
Pen testers all over were beginning to use it as one of their primary tools to test the security of a network.
It was shaping up to be a vital and amazing tool as a pen tester because it made their job so much easier than before.
As the need for pen testers rose, the need for better pen tester tools rose too.
And of course, the whole time, Metasploit was free and open source, so the community could just look at the source code and verify there wasn't anything malicious getting installed on someone's computer once you hack into it.
The security community was slowly adopting it and liking it more and more every day.
Well, as time went on, Microsoft really did step up their game on handling bugs found by researchers.
They were patching things much quicker and were learning that they cannot control the bugs that outside researchers discover.
And that's kind of a hard thing, even for companies, to understand today.
If someone finds a bug in your product, you can't control what that person does with that bug.
You can try to offer a bug bounty reward to them, but that doesn't mean researchers will take it.
They might sell it to someone else or publish it publicly for everyone to see.
Software vendors cannot control what people do with the bugs they find.
And people like HD, who was just publishing vulnerabilities all the time, were making that point crystal clear.
Microsoft has an internal conference that's just for Microsoft employees.
It's called Blue Hat.
And at some point, they started inviting security researchers from outside Microsoft to come talk at it.
HD knew one of the researchers who was giving a talk and was invited to come co-present at Blue Hat.
So HD got to go to this exclusive Microsoft conference and present to their developers.
I just imagine your talk is just like, here are the 400 things wrong with Microsoft.
Yeah, there's a lot of that.
It was like, you know, one good example.
So back in, was it 2005 or so,
I was on the flight over to Blue Hat, and I was playing with a toolkit that I was calling Karmetasploit at the time, or Karma meets Metasploit.
Karma was a way to convince wireless clients to join your fake access point and then immediately start talking to you and try to authenticate to you like you're a file share or printer.
So, essentially, if you had your Wi-Fi card enabled, let's say on an airplane, and someone was running this tool on a different laptop in the same airplane, they would then join your fake access point, try to access company resources automatically, give you their password most times, and then provide a lot of exploitable scenarios where you can actually take over the machine.
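The trick described here can be reduced to one rule: answer every probe request, whatever network the client asks for. A toy simulation of that logic, with all class and method names being illustrative rather than from the real tool:

```python
# A rogue access point that answers *every* probe request with whatever
# network name the client asks for, so devices auto-join networks they
# remember, exactly the Karma behavior described above.

class RogueAP:
    def __init__(self):
        self.victims = {}  # client MAC -> SSID we impersonated for it

    def handle_probe(self, client_mac: str, ssid: str) -> str:
        # A legitimate AP only answers probes for its own SSID;
        # Karma answers for any SSID, impersonating the requested network.
        self.victims[client_mac] = ssid
        return ssid

class Client:
    def __init__(self, mac: str, remembered_ssids):
        self.mac = mac
        self.remembered = list(remembered_ssids)
        self.joined = None

    def probe_and_join(self, ap: RogueAP):
        # Wi-Fi clients probe for networks they remember and auto-join
        # the first one that answers.
        for ssid in self.remembered:
            if ap.handle_probe(self.mac, ssid) == ssid:
                self.joined = ssid
                break

ap = RogueAP()
laptop = Client("aa:bb:cc:dd:ee:ff", ["CorpWiFi", "HomeNet"])
laptop.probe_and_join(ap)  # the laptop "joins" the attacker's CorpWiFi
```

Once the client has joined, the auto-authentication HD describes (file shares, printers, cached credentials) happens on top of that fake association.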
So, we thought it'd be fun to run this tool on the actual airplane as we're flying to Blue Hat.
And lo and behold, we end up collecting a bunch of password hashes from Microsoft employees in the process.
You little stinker.
It was fun times.
Where are you on this whole responsible disclosure thing?
Do you want to get this stuff fixed ASAP?
Or are you more, like, where do you stand? What do you think you should do with a vulnerability if you find it?
After going down that path a few hundred times, the fastest way to get a vulnerability fixed is to publish it on the internet that day.
You know, whether that's responsible or not, it's effective.
Well, he has a point.
It's true.
If you find a bug and want it fixed as fast as possible, make it known to the world in the biggest and loudest way, and it will get fixed fast.
But even though that's the fastest path to getting a bug fixed, it's not the responsible way to do it because doing that exposes a lot of people who can't do anything to stop that attack.
It means criminals can use it before it's fixed.
And this puts a lot of people at risk, which means you're probably doing more damage than helping.
It's better to privately tell the software maker and give them time to fix it.
But then when they aren't fixing it and you've given them plenty of time,
then they might need a little fire under them to get them moving on it.
Sometimes to get a company motivated, you've got to give them a little bad PR.
It definitely depends on the vulnerability.
These days I've been leaning towards kind of a 45-day disclosure policy, where you tell the vendor about it for 45 days, then you tell somebody else about it as a dead man's switch.
And if the vendor sits on it and it leaks, the other person's going to publish it no matter what.
I've been using that strategy by working with US CERT for the last few years, where whenever I publish a vulnerability to a vendor, they get 45 days of, you know, only them having access to it.
And then 45 days later, it goes to US CERT, or sorry, CERT CC.
And they're basically guaranteed to publish it after 45 days.
So the great thing about that model is you're kind of splitting the responsibility.
You're making sure that the vendor takes it seriously and gets the patch out in time.
But you're also not, you know, having to publish it directly on the internet.
So having a third party like that really reduces the ability of the vendor to pressure any individual researcher into not disclosing, because it's already in the hands of another party at that point.
There are a few groups that have adopted this same model.
Trend Micro has the zero-day initiative, and Google has Project Zero.
Both of these groups look for vulnerabilities and report them to the vendor, and then give the vendor 90 days to fix it, and then they're going to publish it publicly.
So the vendor knows if they get a bug report from any of these groups, they have to act quick and get it fixed before it becomes public because that would be a PR nightmare.
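These disclosure clocks are just date arithmetic. A small sketch of the model HD describes, assuming the 45-days-to-the-vendor-then-45-days-at-CERT/CC reading of his description; the function name and return shape are illustrative:

```python
from datetime import date, timedelta

# The vendor gets a private window, then the report goes to a third
# party (CERT/CC, in HD's model) that is guaranteed to publish after
# its own window, no matter what the vendor does.

def disclosure_schedule(reported: date, vendor_days: int = 45,
                        third_party_days: int = 45) -> dict:
    handoff = reported + timedelta(days=vendor_days)      # report goes to CERT/CC
    publish = handoff + timedelta(days=third_party_days)  # public regardless
    return {"reported": reported, "third_party": handoff, "publish": publish}

# A bug reported January 1, 2023 would reach CERT/CC on February 15
# and be guaranteed public by April 1.
sched = disclosure_schedule(date(2023, 1, 1))
```

The 90-day groups work the same way with `vendor_days=90` and no intermediate handoff: the publish date is fixed the moment the report goes out.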
And it's wild to see major tech firms like Google playing this sort of hardball game with software makers.
But this has been working pretty well.
It's also interesting to note that the Zero Day Initiative belonged to HP for a while before Trend Micro bought it, and a few times the Zero Day Initiative found vulnerabilities in HP products which didn't get fixed in that 90-day window.
And so the Zero Day Initiative published HP vulnerabilities publicly.
It was wild and refreshing to see them even treat their parent company the same way as everyone else.
Yeah, it's great.
I mean, I think it's effective.
Sometimes you have to. I mean, the folks I chatted with at HP were like, yep, the only way that team's going to get the resources they need to fix the product is if we publish the zero day.
At some point, Metasploit got a new feature called Meterpreter.
Meterpreter was the brainchild of Matt Miller, also known as Skape.
And, you know, a lot of other folks worked on it, but he was really the architect behind it.
Meterpreter is a payload.
Remember, the payload is the action you want to happen after your exploit opens the door for you.
But the Meterpreter payload is kind of like the ultimate payload.
It lets you do so much on the target system that you just hacked into. You can look at what processes are running. You can upload a file to that system or download a file. It helps you elevate your privileges, or grab the hash file where the passwords are stored. I mean, think about that for a second. Let's say you use Metasploit to get into a computer; with one command, hashdump, it knows exactly where the password file is on that computer, and it just goes and grabs it and downloads it to your computer, so you can just start cracking passwords locally if you want.
You don't need to know where the password files are stored on that computer.
Meterpreter knows that for you.
You just need to know the one command, hashdump, and you've got them.
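The output this produces is the classic pwdump-style format, one line per account: `username:RID:LM_hash:NT_hash:::`. A small parsing sketch; the helper name is illustrative, and the sample line uses two well-known constants rather than real dumped credentials:

```python
from typing import NamedTuple

# Meterpreter's hashdump prints one line per account in pwdump style:
#   username:RID:LM_hash:NT_hash:::
# The parse helper below is a sketch, not Metasploit code.

class HashEntry(NamedTuple):
    username: str
    rid: int
    lm_hash: str
    nt_hash: str

def parse_hashdump_line(line: str) -> HashEntry:
    # Trailing ":::" fields are empty; keep the first four.
    username, rid, lm, nt = line.strip().rstrip(":").split(":")[:4]
    return HashEntry(username, int(rid), lm, nt)

# The LM field here is the well-known "empty LM hash" constant, and the
# NT field is the standard NT hash of the password "password".
sample = "Administrator:500:aad3b435b51404eeaad3b435b51404ee:8846f7eaee8fb117ad06bdd830b7586c:::"
entry = parse_hashdump_line(sample)
```

Lines in this shape are exactly what password crackers ingest, which is why "locally cracking" can start the moment the dump lands on the attacker's machine.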
But Meterpreter does so much more than this.
It lets you turn the mic on and listen to anything the mic is picking up.
It lets you turn the webcam on and see what that computer can see.
It lets you take screenshots of what the user is doing right now.
It lets you install a key logger if you want to see what keys the user is pushing.
Meterpreter is incredible, but with a payload like this, it makes Metasploit so much more dangerous.
I mean, all these features can be easily abused by the wrong person and can cause lots of damage.
On the vendor side, it was scary for them because instead of exploits being these really, you know, simple payloads that they would drop, they could easily detect.
Now exploits could drop anything. They could drop TLS-encrypted connect-backs. They could drop basically mini malware instead, able to automatically dump password hashes and communicate back over any protocol you want.
So we made the payload side of the exploitation process incredibly more complicated and way more powerful.
This is kind of one of those points where some of the features of Metasploit, especially around Meterpreter, start getting really close to the malware world.
Right.
And I think that's where I want to head.
But, like, you're not just doing a proof of concept of, okay, look, I can get into your machine, and here's, you know, whoami or something, and what process ID I'm running as.
You're building this tool.
Meterpreter gives you full access to that computer, which allows you to screenshot, do keyboard, you know, sniffing, whatever.
All these things that are a lot more like thumb in your eye kind of thing.
And I don't know if that's taking it too far.
Like, does it feel like it's not just a proof of concept, it's a "we can completely destroy this machine if we wanted"? Which I guess you have to kind of prove in order to show, you know, the veracity of this vulnerability, but it's almost going too far for me.
What do you think?
Uh, well, one of my favorite things with Meterpreter is we had a way to load the VNC desktop sharing service in memory as part of the payload itself.
And we had it wired up in Metasploit.
So you would literally run the Metasploit exploit and you'd immediately get a desktop on your screen, be able to move the mouse cursor, be able to type on their keyboard.
It was immediate remote GUI access to a machine over the exploit channel itself, which is just mind-blowing at the time for payloads because it didn't depend on RDP or anything like that.
It didn't depend on the firewall being opened because they do a connect back to you and then proxies it.
It was just an amazing delivery.
That specific payload.
blew so many minds that it was really easy for us to show the impact of an exploit.
If you're trying to show an executive after doing a pen test, hey, we got into your server.
Here's a command prompt of us doing a directory listing.
That's one thing.
But if you're showing that you literally take over their server and you're moving the mouse on their desktop within two seconds of connected to the network, that is an entirely different level of impact that you can show.
It also let us build a lot of other really complex, really interesting use cases where it really shows what the impact of the exploit is.
It isn't just like, oh, you've got a bug and you didn't patch it.
And now I've got a command shell.
It's like, no, no, I have all this access to your system, whatever it happens to be.
Yeah, I guess that's kind of what drew me to Metasploit as well. It's like, oh my gosh, it's not just the exploit, it's what you do with the exploit after you get in.
But as you were saying, the Meterpreter started getting close to being its own malware. Explain what you mean by that.
A lot of the malware payloads, even today, are written in C, and they've got these kind of advanced communication channels and C2 contact mechanisms and all this kind of boilerplate stuff that they do, like providing the ability to chain-load payloads, download more stuff, talk to backends, bounce between different backends. We got Meterpreter to the point that it actually had the same capabilities as some of the more advanced malware that are out there.
And that's when it started getting a little squiffy for me because it's like, we don't want to be in the malware business.
Like, we're here to show the impact of exploits and let people test our systems and to generally, like, you know, demonstrate the security impact of a failed security control or missing patch.
But we're not here to persistently infect machines.
And Meterpreter got very, very close to that line.
The thing that really separated it from actual malware is the fact that it was always memory-based only.
It was never on disk at all.
This is a strange territory to be in.
Metasploit is a tool whose sole job is to hack into computers.
Whether you have permission to do that or not, that's the purpose of it.
But it seems to be the intent of the person using it that tells us whether Metasploit is malware or a useful tool.
So the Metasploit team had to be very careful on how far they took this tool.
Now, this is an open source, you know, multi-developer project. Did you have some sort of manifesto, or a meeting, that said, okay, guys, we're going to push this as far as it goes, except no persistence? Like, was there a manifesto, like you just said, you know, you don't want to leave your customers weaker, this is a professional tool? Something written out there?
It was never like a written manifesto because it wasn't like a ethical boundary.
It was just a practical boundary.
Like, you're not going to use Metasploit for a pen test if it leaves garbage all over your machine afterwards, or backdoors it in a way that's difficult to fix.
Some exploits require temporarily creating, like, a backdoor user account, or otherwise creating something that would create more exposure.
And we're always really careful to document what the after exploit scenario looks like.
Okay, after you run this thing, you need to do this other thing.
So we created these like post-cleanup modules that would remove the trace of whatever the thing was.
But that was something that I always agonized over, because I really hated having to lower the security of the system as part of the exploitation process. It was counterintuitive.
That was kind of going against what we're trying to do in the first place.
Yeah, I know.
And maybe I'm not explaining it well, but it just seems like you're putting your thumb right in the customer's eye.
And then you're like, well, we don't want to hurt you.
You know?
Well, that's the thing.
You're trying to be a professional adversary.
And so you have to have the most brutal, malicious approach possible, in the sense that you're going to use the same technique someone else would.
But then you need to draw the line about where you leave the customer afterwards and what the actual impact of the attack is.
Okay, so we heard HD has many adversaries, right?
Cyber criminals don't like him publishing their weapons and making them ineffective.
Old school hackers don't like that he's making hacking so easy that a script kiddie can do some amazing stuff.
And vendors don't like that he's publishing their bugs.
He's getting hit on all sides by these people.
But there's one more group that's also not happy about Metasploit.
Law enforcement.
There were crimes committed with Metasploit.
Yeah, that's my first experience writing Windows shellcode.
The first Windows shellcode ever published by Metasploit ended up in the blaster worm almost immediately afterwards.
See what I mean?
There was a massive worm that was using the information that he published to do dirty work out there.
And I just read an article today that said in 2020, there were over 1,000 malware campaigns that used Metasploit.
And so what happens in this situation when you're making tools that criminals are using?
Well, let's go back and look at a few other cases.
I did an episode on the Mariposa botnet.
The people who launched this botnet all got arrested, but they weren't the ones who developed the botnet.
The Butterfly botnet was created by a guy who went by Iserdo.
But this Iserdo guy, all he did was develop the tool and put it out there.
He never used it to attack anyone, but he was arrested and sentenced to jail just for developing the tool.
What the court proved was that he was knowingly giving it to criminals to commit crimes.
Or let's look at Marcus Hutchins.
He developed malware, which became known as Kronos, but he only developed it.
He never launched it on anyone.
But it was because he was giving it to someone who did use it to attack banks that Marcus was arrested by the FBI.
In both of these cases, what it came down to was whether or not the software maker was knowingly giving these hacking tools to someone who had intent on breaking the law with it.
But HD claims he has no responsibility with what people do with his tool.
I don't know.
If you bake a bunch of cookies and put them on a sheet in the street and say, free cookies, like, are you responsible if a criminal eats a cookie?
I don't know.
Like, I feel like it's different.
It's open source.
It's community-based.
It's an open domain.
Everyone's on the same playing field.
I feel like it's one of those things where if you're only providing those exploits or those weapons to someone in the criminal community and charging for them, that's one thing.
But if you're creating a project for the purpose of helping everyone else understand how things work and to test their own systems, and a bad actor happens to pick it up and use it too, that seems like something very different.
But I get worried for HD because he takes Metasploit to hacker conferences and hacker meetups to demo it and teach it to other people there.
And everyone knows there are criminals who attend these things.
I mean, just sharing it with the hacker chat rooms that he was part of.
Like, frack, how could he have gone all this time without once seeing that the person he just taught this to or gave it to was a known criminal?
Did you have any lawyers helping you on this project?
No, once in a while I'd have to reach out for help, but it usually wasn't from a lawyer that I hired myself.
Usually it was just people I knew who happened to be lawyers giving me advice on stuff.
But that's why I'm asking about a lawyer is whether or not you had some sort of fine line on what the point of Metasploit was and maybe some of the language involved with the terms of use.
Like maybe there was something there that said, you cannot use this for criminal behavior or something.
Where was this to keep you out of trouble?
What did you do to stay out of trouble in this sense?
I mean, I think early on, the solution was my spouse had a get-out-of-jail fund, a lawyer fund, set aside.
So if I got dragged off in the middle of the night, she had cash that was not tied to my personal accounts or shared accounts to find a lawyer and give me bail money, basically.
That was the case for about six or seven years, where I was pretty concerned about getting arrested for almost anything I was working on at the time, because it was all pretty close to the line, whether it was the internet scanning or the Metasploit stuff.
You know, it really comes down to whether you think a prosecutor is going to make a case, whether you think they think they can make a case.
Like, prosecutors don't want to lose a case, so they're not going to bring a charge against you unless they're very certain that they're going to win.
That's why the conviction rates were so high.
So it's one of those things where intent matters, but what really matters is whether the prosecutor really wants to go after you or not.
And if you convince them that, you know, hey, I'm not actually a bad actor and I'm not doing this stuff and I'm not driving this economic activity that's related to criminals, then that's helpful.
But that's one of the things I really don't like about U.S.
law.
It's, you know, the CFAA doesn't care about intent, for example.
There's nothing about our Computer Fraud and Abuse Act that cares whether you're doing it for good or not.
And a lot of our laws are problematic like that.
It isn't just like the standard section that's quoted.
It's also section like 1120.
There's a couple other parts of the U.S.
Criminal Code that are just really dangerous when they're taken out of context or
used to make a case for something that really shouldn't have been prosecuted in the first place.
So, unfortunately, like a lot of U.S.
prosecutions really just come down to
whether someone wants to go after you or not.
And all you can do is, you know, do your best to stay above the law when you can.
And when the law is really vague, do your best to not be a tempting target.
Yeah, but I am surprised, because when I load up some software, or even look at some, you know, how-tos and videos on how to hack, there is a disclaimer at the beginning: do not use this for illegitimate purposes, do not break the law with this information.
And when I load Metasploit, it doesn't say, for pen testing only, only use on systems you have permission to test.
And I'm wondering, why would you keep that off there?
I don't think it ever occurred to us to add a warning, honestly.
Like, we figured if you're downloading Metasploit, you know what you're getting into.
You know you're downloading a security tool to do security testing, and we're not there to tell you you shouldn't, you know, jaywalk, or you shouldn't, you know, firebomb your neighbor's house.
Like, we assume people have reasonable reasons why they're using the software in the first place, and we don't feel like we're enticing them to commit a crime because we're providing them a tool.
Got it. However, in the real world, you might be pressured, because law enforcement says, look, man, we keep finding criminals that are using your tool.
You need to do something more.
You need to put a terms of use up.
A lawyer might have, like, you might have had to get a lawyer to say, hey, what do we need to do so that we don't get in trouble?
And I'm surprised none of that just hit you in the face.
Like the law, like, so black hats are mad at you, vendors are mad at you, but the law wasn't mad at you.
I'm surprised.
I mean, stuff came up for sure, but mostly was able to talk my way out of it one way or another.
I think a lot of it is just the way to win in that space and to not go to jail was just to be as loud and as blatant and as above board as you possibly can.
So, you know, doing a Metasploit talk at every conference, having you know, tens of thousands of Metasploit users early on, having 200 different developers involved with the project.
The bigger, the wider, the more noisy we could make the project, the less likely someone was going to say, This is the tool for just criminals.
We're going to go after it.
You just have such a
surprising, like
an adventurous life.
There's a big difference between your typical pen tester and HD Moore.
The typical pen tester today learns how to use Metasploit, which is the tool that HD created.
And HD is the one learning how the exploits work, writing the shellcode to make them work, and actively trying to find new exploits all the time.
On top of that, he's fielding a non-stop barrage of attacks himself from creating the tool, so he's well versed at defending and attacking systems.
The experience he has in this space is almost unparalleled.
But it was because of how much passion he has about security that got him to this point.
And I just want to say to any up-and-coming pen testers out there, getting your hands on working exploits and contributing to open source projects is a fantastic way to become fluent in this field.
There are a ton of open source hacker tools out there on GitHub, and it's a great experience to download the source code and see how they work and try to improve upon them.
And even if you're just a beginner, there's probably something you can do to help, whether it's writing better documentation or improving the help menu.
Being part of a project like that can launch your career.
And HD even helped many of his contributors get jobs.
Learning to find and develop exploits would really pay off for HD, but it was a tough ride for him to hold on to.
Yeah, I think it took about three or four years before we really turned the point from, that's stupid and that's crappy, to, that's a script kiddie tool, to, that's a piece of crap and I don't like it, to, okay, fine, we'll use it, to, you know, hey, now everyone's using it.
Metasploit grew up to be one of the de facto tools used by security professionals all over.
Eventually, schools started teaching students how to use it, and I mean, can you imagine a hacking tool becoming part of the course curriculum in school?
But even more than that, it became necessary to know how to use Metasploit to pass certain exams and get certified in security.
Despite the hard start and hate it received, Metasploit grew to become an invaluable tool for the pen test community to use, and it became mass adopted by security teams everywhere.
By 2008, both skape and spoonm had moved on to other things.
skape's company got acquired by Microsoft, and he went and worked there.
And that ended his contributions to Metasploit.
spoonm went to school and kind of disappeared, doing his thing for a while.
And so it was kind of just me running the project again by 2008.
And I'd been working with a guy named Egypt for a long time, contributing exploits to the project and chatting about stuff.
And I invited him to kind of be one of the core members.
He joined the team and
we started working towards the 3.0 release, I believe, at the time.
And during all that stuff, you know, as it got closer to 2009, I was working at another startup, not particularly happy with life.
You know, I was pretty broke.
I mean, the startup wasn't paying me that much.
I had a bunch of credit card debt,
you know, had a pretty hefty mortgage on the house.
Was, you know, doing Metasploit training at the conferences to kind of pay the bills and keep things going.
But I was also working all day for a startup and all night on Metasploit and every weekend, every night for years straight at that point.
Super stressed out, had a baby on the way.
And when I was basically gone for paternal leave, I got an offer to acquire Metasploit by Rapid7.
Whoa, an offer to acquire Metasploit by the company Rapid7?
That's amazing.
At the time, Rapid7's product was a vulnerability scanner.
And the typical pen test scenario is to start by running a vulnerability scanner, then use Metasploit to try to get into the vulnerable systems you found.
It's a beautiful combination of tools, so it made sense for why Rapid7 would want to acquire the tool.
But Metasploit was open source and not a product that made any money.
So HD was a bit skeptical to give his tool to a corporation.
But they asked him at the right time because he was all stressed out, low on cash, and about to have his first kid.
He sort of needed a big break.
So, you know, when the offer came in to do something different, it was definitely tempting, and I spent quite a lot of time chatting with Rapid7, getting a sense of what it looked like.
And eventually I said, okay, let's give it a try.
Yeah, did you give him a heads up?
Like, hold on a second.
If you take the responsibility for this, you're going to be taking some bullets.
Just so you know, this is kind of the heat I'm getting here.
And somebody might call up to try to get you fired.
Yeah, put it this way.
Like, they brought me on to run the Metasploit team and to build the product line, but they also brought me on as their head of security at the same time.
So I got to take most of those bullets in the first few years.
Metasploit had a pretty strong following, but only about 33,000 active users at the time or something like that, based on our download logs.
So it was a really good opportunity to, you know,
commercialize an open source tool, but keep it open source.
And then all the commercialization really happened by building a pro version of the tool and selling that instead.
So our team was able to, you know, basically build a new office here in Austin, hire the team, and get the first commercial product out the door in about six or seven months.
And I think our team was paying our own bills within 12 months by selling our pro version of the product.
So it ended up working out pretty well.
We, you know, even now there's a whole team at Rapid7 working on Metasploit full-time.
And it wasn't just the development side.
They also were an amazing corporate shield for all the drama I was dealing with, all the law enforcement inquiries, all the random threats, all the other stuff.
They stood up and took it.
They hired lawyers on my behalf.
They hired lobbyists on my behalf.
They did everything they could to make sure that Metasploit and exploit development and vulnerability research could stay a thing that you could count on, that you could rely on.
And they did their best to protect it on the legal front.
So, you know, outside of all the commercial terms and product stuff and all that, I give them a lot of credit for helping
vulnerability research and exploit disclosure and exploit sharing be what it is today.
Yeah, so you said lobbyists.
Why would they hire lobbyists?
Well, a lot of making sure that vulnerability research and disclosure and all that stuff stays legal is educating people.
It's like saying, hey, this is like a real legitimate reason why people need access to information.
This is why you don't want to regulate vulnerability disclosure.
This is why you don't want to create a law making exploit disclosure illegal.
I mean, on the face of it, if someone says, hey, we're going to prevent people from sharing tools that allow people to attack each other, it's like, yeah, that sounds like a good thing.
You don't want people sharing evil tools with each other, right?
Make that illegal.
It isn't, until you dig in a little bit deeper and realize that you really don't want to criminalize that, because that's how your defenders are learning.
That's how your actual defenders are testing their own systems.
And if you don't have those tools available internally, you have no idea how effective any of your defenses are.
And it was just one of those things where,
at a very surface level, it was hard to defend.
But once you started educating people about what the benefits were, and once you got kind of more people to be aware of what you take away by criminalizing this type of work, then you start to build that support.
So the lobbyist efforts at Rapid7 were instrumental in not only excluding the Metasploit framework from the Wassenaar Arrangement, at least the way the U.S.
interpreted it,
but protecting vulnerability research in general.
Yeah, can you explain the Wassenaar Arrangement?
Oh, sure. It's been a while, so I'm probably going to get the details wrong, but the Wassenaar Arrangement was an international arms treaty by a bunch of countries saying, here's the things that we will or will not export to other countries without having approvals and things like that.
And an amendment, I think, either an amendment to it or an interpretation of the agreement, started to classify cybersecurity tools as weapons at one point.
And the goal there was to prevent, you know, kind of NSO group style attacks, right?
Where you're shipping a toolkit, a software toolkit or a hardware toolkit that's designed to break other people's machines.
And it's really designed for like the most nefarious, you know, either surveillance use case or for actual cyber war type use cases.
However,
the language caught up a lot of other unrelated tools.
All the tools that are used for professional security testing would, you know, if you squint at them right, also be classified as weapons or munitions by the Wassenaar Arrangement.
And, you know, the company, Rapid7, spent a lot of time working with lobbyists, trying to help folks understand the difference between an open source tool like Metasploit and something that's more, you know, targeted and malicious and weaponized.
The thing that I don't understand about the Rapid7 acquisition is
how do you buy a free open source tool?
Like, why didn't they just fork it and rename it?
Well, someone tried that actually.
It didn't go very well.
Actually, a few people did.
Prior to Metasploit 3 coming out, when we rewrote the whole thing in Ruby, Metasploit was written in Perl.
And there was a company called Saint that released a product called Saint Exploit, which was also written in Perl.
And we're like, ah, that's suspicious.
At some point, someone shared a copy of the Saint Exploit with us.
We're like, you know what?
Half this shellcode is ours.
And half these exploits look really like the code that we wrote.
And there were a lot of similarities between the Saint Exploit product and Metasploit framework too.
So we got a little bit mad about it.
We're like, this is kind of bullshit.
Like, we feel like if you're going to use our code, that's great, but like, collaborate.
You know, don't pretend it's yours.
Don't like, don't say, hey, I made this.
Like, no, no, this is open source.
Contribute to it.
Share it.
So we changed it.
We literally changed the license of Metasploit to be a commercial-only license briefly, for about a year or so.
Between the 2.0 Perl version and the rewrite into 3.0, the brand new 3.0 code was under a non-open-source license briefly, just because of how we felt about Saint and Saint Exploit.
Finally, when Egypt joined the project, and we were looking, you know, prior to the Rapid7 commercialization, or Rapid7 acquisition, we ended up changing the license back to BSD, because we felt like that was the right thing to do to really grow the project.
But there definitely was like a knee-jerk reaction to close the license after that.
So Metasploit continued to be open source and free under Rapid7, with HD and a guy named Egypt coming on board and working hard on making it even better.
And one thing that was a never-ending job was getting more exploits into the tool.
When I was working at Rapid7, every time a patch Tuesday came out, our very first thing was how do we get exploits out as fast as possible for everything that was covered?
And how do we figure out what they are?
You know, it's a lot of work, though.
Like taking a binary patch and trying to figure out the bug can take a week or two just on its own.
And that just gets you the bug.
That doesn't get you the exploit.
Getting the exploit to work, getting it triggered, getting it reliable, figuring out how to manage the memory correctly, figuring out the payload, threading problems with payloads.
I mean, there's a ton of work that goes into it.
I think one of the reasons why I probably don't work on exploits as much anymore is they've gotten a lot more complicated.
Like you need a much deeper set of skills to be able to work on, you know, fiddly heap exploits.
You need to basically have this huge background or knowledge, just to be able to get the heap in the right state to be able to exploit in the first place.
And that's, you know, I'm not really that great of a programmer.
I'm not really that great of an exploit developer.
I just spent a lot of time on stuff.
So I feel like that was well beyond my ability to keep up at that point.
So I really love logic flaws.
I really love the old school, like, you know, stack overflows and SEH overflows and things like that.
But I feel like
modern exploits, especially on like hardened platforms like mobile, holy cow, there's a lot of effort that has to go into it just to get one working exploit.
No, I'm scared that you say that because a second ago, I was calling you like the patron saint of exploit development and penetration testing.
And now you're like, oh, it's too complicated for me at this point.
Good luck, whoever's doing it now.
Who can do it now if it's beyond your skill?
I mean, it's got to be super specialized.
I mean, if you look at some of the Project Zero posts,
I don't want to name particular names, in fear of getting them wrong, but there's some amazing folks out there.
And where you see really good exploits being written is when someone has spent months and maybe years looking into the software stack around that before the exploit's worked on.
When you're looking into like how iOS parses messages or how the heap of this particular OS or the Linux kernel is being groomed in a particular way, you need to build up this like super deep, super specialized knowledge to be able to even start working on exploits in that particular space.
It's not like before where once you know how to exploit one platform, one OS, the rest is all pretty straightforward.
It used to be like, okay, I know how to exploit SPARC; I can exploit most other MIPS with a little bit of work here and there.
Now like every OS is so different, so deep and so complicated these days that you really have to specialize.
Yeah, but I feel like
you really enjoy playing in the dark.
And I mean, like, you want to be outside the known.
world of knowledge.
Okay, so there's a circle that this is the stuff we know in the world.
I'm going outside that circle and I'm going to discover things that the world does not know and bring it into the world of known.
And that is a very difficult place to be in.
That's a scary place.
You don't know where to go, which direction to go, where to point your finger.
You're hitting your face on the wall over and over and over.
And that's the difficulty of finding vulnerabilities and zero days and this kind of thing.
Even if you know that there's a vulnerability right there, it still can be hard to find that.
That's probably super frustrating, especially with patch reversing, because you know it's there.
You know, it's patched.
You know it's in front of you.
You know it's probably one line away from where you're looking.
You can't see it.
So these days I spend my time on, like, network protocols and fingerprinting techniques and that type of research, where you're going really deep down the protocol stack, looking for behavioral differences in how a device responds to the network.
And it's a similar challenge.
You have to go find these really fiddly, really hard to find things and then extrapolate all this value from it, saying, okay, now that I know that it responds this way and this responds that way, it must be an iOS device with this particular kernel version or this particular update applied to it.
So I love doing that type of work because it is
working in the dark, like you mentioned, but it's nowhere near as complicated as doing modern heap exploits.
I find this particular skill to be one of the most important skills when dealing with technology, which is being comfortable doing things in the dark in areas that you have no knowledge of or visibility into.
Because when working in IT, you are constantly faced with new challenges or problems that you have no idea how to solve.
The problem might even be so weird that you don't even know what to Google.
And so, being able to venture out into unknown territories, even if it's just unknown to you, you've got to learn to be comfortable in these dark areas.
It's scary and frustrating to try things that you know you're going to fail at and even look stupid doing.
But the more comfortable you get in that space of working with the world of unknowns,
the better you'll be next time you face the darkness, which is like all the time.
Are you still at Rapid7? Oh, no, no. I started my own company about three and a half years ago, doing network discovery stuff. So, Rumble. We help companies find every single thing possibly connected to their network environment or their cloud. Yeah, explain more. Give a good pitch for it.
Sure thing. So, like, I mean, I've spent 27 years now doing pen testing and security work and building products, and the very first thing you do, whether it's a pen test and you're trying to break into someone's network, or you're building a product that does something on the network, like a vuln scanner or a pen test tool, is you've got to figure out what's out there.
You've got to scan the network, you've got to find targets, assets, IP addresses, things.
So we came up with a really cool scan engine that can tell you amazing stuff about everything on the network really quickly.
And at this point, you know, with the product, Rumble Network Discovery, you can now find all your networks.
So starting with zero knowledge about your environment, it'll do a sampling sweep across every possible routable private IP in your organization.
It'll find every populated subnet, every single device, classify every device, tell you what hardware it's running on, and identify things like multi-home systems that are bridging different networks.
And it does it all unauthenticated quickly with like really no interaction and no real network impact.
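The "sampling sweep" HD mentions can be illustrated with a toy sketch: instead of probing every address in the private IP space, probe a few sample addresses per subnet and only fully scan subnets where something answered. The probe itself is stubbed out here as a callback; a real discovery tool would send ARP, ICMP, or TCP probes, and the sample count is an arbitrary illustrative choice, not how Rumble actually works.

```python
# Toy sampling sweep: probe a few addresses per subnet to decide
# which subnets are populated and worth a full scan.
import ipaddress

SAMPLES_PER_SUBNET = 3  # illustrative; real tools tune this


def sample_addresses(network: str, count: int = SAMPLES_PER_SUBNET):
    """Pick the first `count` host addresses of a subnet to probe."""
    hosts = ipaddress.ip_network(network).hosts()
    return [str(next(hosts)) for _ in range(count)]


def find_populated_subnets(subnets, is_alive):
    """Keep only the subnets where at least one sampled address responded.

    `is_alive` stands in for a real network probe (ARP/ICMP/TCP).
    """
    return [
        subnet
        for subnet in subnets
        if any(is_alive(ip) for ip in sample_addresses(subnet))
    ]
```

The payoff of this design is scale: the RFC 1918 space contains millions of addresses, but sampling lets a scanner discard empty subnets after a handful of probes and spend its time on the ones that are actually populated.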
What I find fascinating about HD is the struggle that he went through to make Metasploit.
I mean, the sheer skill it takes just to write exploits and payloads is already impressive and he had to continually write new exploits as new stuff came out.
But the resolve and determination to face a constant barrage of attacks for publishing exploits and to continue publishing more is incredible.
I think I would have given in and given up working on it if vendors were calling my boss asking them to fire me, or if law enforcement kept bugging me, but not HD.
He persisted through it all because he had a vision and a belief that what he was doing was right and the whole world was wrong.
And I think it turned out in his favor.
I think he was right and the world was wrong because we saw the world slowly change and eventually agree with HD.
Microsoft drastically changed how they handle bugs now, and their security is much better than it was before.
Google puts a similar kind of pressure on companies that HD does, saying, You better fix this vulnerability we found, or we're going to tell the world.
And when stuff doesn't get fixed, they do publish it.
And governments are changing the way they view open source tools.
What a wild ride it's been to get some decent hacker tools out there for everyone to use.
A big thank you to HD Moore, a true legend in this security space.
You can learn more about what he's working on now by visiting rumble.run.
This show is made by me, the knob sledding Jack Rhysider, and editing help this episode by the Zero Trust Damienne.
This episode was assembled by Tristan Ledger and mixed by Proximity Sound.
Our theme music is by the encoded Breakmaster Cylinder.
Hey, HD, one last question for you.
Yeah.
When you're reviewing someone's code, can you tell me what bad code looks like?
No comment.
This is Darknet Diaries.