Andrew "weev" Auernheimer is perhaps the most infamous Internet troll around. His extreme lifestyle was profiled by the New York Times Magazine back in 2008. Some of Auernheimer's antics have been purposely outrageous, and some allege they've been particularly malicious and misogynistic.

But he became an unlikely poster child for reforming the Computer Fraud and Abuse Act -- the statute used against the late activist and programmer Aaron Swartz -- when Auernheimer was convicted of conspiracy and identity theft under the 1986 law for sharing customer information he says AT&T left out in the open. Auernheimer and an associate allegedly wrote a script that collected the information of about 120,000 iPad 3G customers that AT&T stored on a public-facing server and then sent the data to Gawker, which published some of it in redacted form.

Auernheimer and his legal counsel argued that he had done nothing wrong because the information was not secured, and many industry insiders worried that Auernheimer's case would have a chilling effect on information security researchers. The government, meanwhile, argued that Auernheimer had violated the privacy of thousands of people. The Justice Department won, with Auernheimer being sentenced to 41 months in prison.

But last month, after Auernheimer had served a year, the conviction was vacated on the grounds that the case had been pursued in the wrong jurisdiction, thus leaving questions about how far the CFAA can be stretched unanswered. 

Now, Auernheimer is working on a new project -- a hedge fund called TRO LLC that will attempt to short the stocks of companies with security vulnerabilities. Essentially, the group would work with security researchers to identify companies with security problems and then bet against them in the stock market before starting to publicize their problems. 

I ran into Auernheimer while covering the Ridenhour Prizes last week, and we talked about his newfound freedom, CFAA reform and his next project. This interview has been lightly edited for length and clarity. 

Andrea Peterson: So, I should probably start with the obligatory: How are you enjoying freedom?

Andrew "weev" Auernheimer: Well, it's a modicum of freedom that we have in America outside a prison wall. I'm pretty stoic about it. I feel like I could be re-indicted at any moment since the case was dismissed without prejudice, so I have to share my wisdom and hope I find a blanket to sleep on. So I feel alright.

Do you expect there to be new charges?

I don't know. I'm certainly moving forward as if there weren't, and I can't speculate on what the government is going to do. I've already lost a lot of time, and I can't live my life as if the hammer is going to come down... I don't know. I don't really expect new charges, but I'm willing to fight them if they bring it again. I will place my body on the altar of liberty 10 more times if it will help overturn the CFAA.

But I think they realize now A) what a ridiculous case this is and B) they brought it when they thought I had nothing and nobody on my side -- and now clearly I have stellar counsel and a lot of legal muscle behind me. Clearly, I'm not an easy target and will fight until I have nothing left.

Are you planning on becoming more involved in CFAA reform?  

I think my best position as an activist is going to be as an economic activist.

What do you think about prisoners' access to online communication tools? 

There were penalties for anybody who somehow published something anywhere...

I saw your tweets.

Yeah, there's a lot of penalties for people who try to maintain a public voice while they are in prison. I think prisoners should definitely have access to Web publishing platforms. No doubt about it. It's blatantly unconstitutional that they don't.

So that's your plan now -- I know you've been talking about a hedge fund...

I want to create a means by which security researchers will be able to monetize and make a living off of their abilities without sharing information with the government -- so that's really my path forward. When you craft a software exploit or find an issue in a company's Web application, there are generally two paths you can take to make a living off of it: Either work with criminal organizations or work with governments on backdooring stuff and using it to illegally spy on people and violating their privacy. There's a third way.

The path I'm trying to pursue is to create a third way, where members of the community can cooperate with me and still make a living. We can make a reasonable living without doing something which is amoral.

What do you think about criticism that setting it up in such a way -- to short companies -- might also be immoral?

Absolutely not. Companies that repeatedly get punished on information security issues get very good at facing those issues. That sends a message to shareholders that if they don't make demands of their boards and corporate governance, then they're going to get hit in the wallet. This is going to happen a significant number of times. And once everybody goes, "Hey, this is a real issue -- it's going to cut into your revenue at one point or another, we need to worry about it as shareholders and we can't just check the PCI compliance boxes and say we're done as far as security and privacy." When shareholders make demands of companies they hold shares in, then we'll see headway on privacy issues.

So this is a free-market solution?

Exactly. This is participation not only in the financial markets but in the marketplace of ideas.

Are you only going to go after companies that have the financial resources to respond to some kind of incident?

I can only short publicly traded companies, right? That right there is making it so anybody I'm going after is probably of a decent size -- if they are on Nasdaq. I'm not going after mom and pop here -- there's no money in it.

How far into the setting-up of the actual structure of this are you? How broad is your network of people who have said they want to participate? 

I've got contacts and tons of funds. People want to give me money right now, and I'm like, "Whoa, slow down a little." We have serious code to write, as far as a cryptographic infrastructure; we're doing some stuff that means we're not going to be ready until August. You know, people want to give me money right now, and I'd love to take their money, but I'm not ready yet.

What do you see as the long-term end goal? 

The long-term end goal is to give people who develop software exploits a means to make a living that is not cooperating with governments to spy on people and is not working with criminal organizations. It's to reward them for bringing that information to the public.

And you don't see it as any way creating an incentive to not disclose issues to companies? 

There's already an incentive to do the wrong thing, huge financial incentives. The only possible reasonable financial incentive to do the right thing is to tell people what's already happening right now, and people keep going quiet.

So, how is it better for them to tell you than, say, the companies with the problems?

I've dealt with this before, obviously. The company has no incentive to inform the consumers. There are a couple states with very specific, narrow laws about what kinds of breaches must be disclosed to the consumer. But let's say your e-mail account was compromised and someone has all of your e-mails: There's not a law saying your e-mail provider has to tell you. So there's almost no incentive on behalf of the companies to inform consumers.

There's no winning if you go to the companies. And, furthermore, what moral obligation do I have to someone who is doing wrong, hasn't confronted it and wants to silence it? I have no moral obligation to the company. I have a moral obligation to the public, to the general public.

But could you see how it might appear that you would be profiting while harm inevitably comes to the consumer? 

I don't think so. If you look right now, the bad process of disclosure is hurting consumers very egregiously. Right now, disclosure works like this: If there's full disclosure of a software exploit, someone just releases working exploit code. That's bad because then criminals can use the exploit code, right? If you can talk about the effects, and what's actually been compromised, without giving people the weapons, then consumer safety will be much better served. There's no doubt about it.