The Little Tech Agenda for AI

Who’s speaking up for AI startups in Washington, D.C.?

Matt Perault (Head of AI Policy, a16z) and Collin McCune (Head of Government Affairs, a16z) unpack the “Little Tech Agenda” and the latest in AI policy—why AI rules should regulate harmful use, not model development; how to keep open source open; the roles of the federal government vs. the states in regulating AI; and how the U.S. can compete globally without shutting out new founders.

Timecodes

00:00:40 — Introducing the Little Tech Agenda

00:03:23 — Pillars of the Agenda and the small builder vs. trillion-dollar company gap

00:05:00 — Competition, markets, and smart regulation

00:08:59 — “Regulate use, not development” and its misinterpretation

00:10:16 — The AI policy arc since 2023: hearings, fear narratives, and executive actions

00:18:05 — Licensing regimes and open source bans compared to nuclear policy

00:39:39 — China, export controls, and open source in a global race

00:47:21 — Federal vs. state lanes, moratorium fallout, and what’s next

Transcript

00:00:40 — Introducing the Little Tech Agenda

Erik Torenberg 00:00:40

Collin, Matt, welcome to the podcast.

Matt Perault 00:00:42

Thanks so much.

Collin McCune 00:00:43

Thanks for having us.

Erik Torenberg 00:00:44

So there's a lot we want to get into around AI policy, but first I want us to take a step back and reflect a little bit. We publicly announced the Little Tech Agenda in July of last year, and a lot has happened since. Collin, take a step back and talk about what the Little Tech Agenda is and how it came to be at the firm.

Collin McCune 00:01:01

Yeah. I mean, look, a ton of credit to Marc and Ben for having the vision on this. Certainly when I first started here, we began advocating on behalf of technology interests and technology policy, and what we realized was that there have been these big institutional players in DC and the state capitals for a very long time.

Some of them have done a lot of really good work on behalf of the entire tech community. But there wasn't anyone who was specifically advocating on behalf of what we call little tech, which in my mind is the startups and entrepreneurs, the smaller builders in the space.

And beyond that, what we realized was that they're not always a hundred percent aligned with what's going on with the big tech folks. That's not necessarily a bad thing or a good thing, but that was the whole impetus of this: how are we going to position ourselves in DC and the state capitals in terms of our advocacy on these issues, and how do we differentiate ourselves from the big tech folks, who come with certain degrees of baggage...

Erik Torenberg 00:02:10

On the left and the right.

Collin McCune 00:02:11

And the smallest is small, from the left and the right. Right. So that was really the basic impetus of this.

Matt Perault 00:02:17

For me, it was actually sort of almost a recruiting vehicle.

So when it hit in July, I was not yet at the firm; I started in November. And when I first read the agenda, it transformed the way I looked at the rooms I would sit in for policy conversations, where all of a sudden you could see essentially an empty seat, and little tech's not there.

There would be conversations where people would say, in this proposal we want to add this disclosure requirement, and then we'll have companies do a little bit more and a little bit more. And when you've read the Little Tech Agenda, all of a sudden you start thinking: how is this gonna work for all the people who aren't in the room?

And so for me, the question in thinking about coming into this role at the firm was: is this a voice, a part of the community, that I want to advocate for and think about? When you start looking at the policy debate from the perspective of little tech, and you see how many of the conversations don't include a little tech perspective, it becomes very compelling to think about how to advocate for this part of the internet ecosystem.

Erik Torenberg 00:03:14

And Collin, why don't you outline some of the pillars of the Little Tech Agenda, or some of the things we focus on most, and maybe how it differentiates from big tech more broadly?

00:03:23 — Pillars of the Agenda and the small builder vs. trillion-dollar company gap

Collin McCune 00:03:23

Yeah. I mean, just from a firm perspective, obviously we're verticalized. We all live and breathe this, and I think that's made us very competitive on the business side, but it's made us very competitive on the policy side too. Matt leads our AI vertical as our AI policy lead.

We have a huge crypto effort. We have a major effort around American Dynamism, which is largely defense procurement reform, something the United States has needed forever. We have other colleagues who work on the bio and health theme, fighting for FDA reform and everything from PBMs on; there's a whole vertical there. We're working a lot on fintech-related issues. And then for classic internet entrepreneurs coming up, there are a lot of tax issues that come along with that, and of course there are the venture-specific things we have to deal with.

But look, I try to think about this from a basic point of view, which is: if you are a small builder, five people in a garage, what should differentiate you from a trillion-dollar company with hundreds of thousands of employees?

How are you supposed to comply with the same things that are built for thousand-person compliance teams? It's just not the same thing. There are categories upon categories of issues that Matt and I deal with on a regular basis, but that's probably the main pillar: five-person company versus trillion-dollar company.

Not the same thing.

00:05:00 — Competition, markets, and smart regulation

Matt Perault 00:05:00

It's actually made my job really hard in certain ways since I started at the firm, because the kinds of partners you want within our portfolio often don't exist, in that a lot of the companies don't have a general counsel. They don't have a head of policy.

They don't have a head of communications. And so the kinds of people who typically sit at companies thinking all day about what a state is doing in AI policy, or what a federal agency is doing in terms of rulemaking, aren't at startups that are just a couple of engineers trying really hard to build products.

Those companies face an incredibly daunting challenge. It seems daunting even to someone like me, non-technical and never having worked at a startup. They're trying to build models that might compete with Microsoft or OpenAI or Meta or Google, and that is unbelievably challenging in AI: you have to have data, you have to have compute, and there's been a lot written about the cost of AI talent recently. And so the question that Collin and I talk about all the time is, for those companies, what are the regulatory frameworks that would actually work for them, as opposed to making that competition even more difficult than it already is?

Erik Torenberg 00:06:09

One of the principles I've heard you guys hammer home is that we want a market where startups can compete. We don't want a monopoly. We don't even want oligopolies, a cartel-like system. And that doesn't mean no regulation, because as we've seen, that could be destabilizing too.

But it means smart regulation that enables that competition in the first place.

Matt Perault 00:06:29

So one of the things that's been surprising for me to learn about venture is the time horizon we operate in. Our funds are 10-year cycles. We're not looking to spike an AI market tomorrow and have a good six months, or a good year, or a good two years.

We're looking to create vibrant, healthy ecosystems that result in long-run benefits for people and long-run financial benefits for our investors and for us. And that means having a regulatory environment that facilitates healthy, good, safe products. If people have scammy, problematic experiences with AI products, if they think AI is bad for democracy, if they think it's corroding their communities, that's not in our financial interest.

That's not good for us. And so that really animates the core component of the agenda, which is not trying to strip away all regulation, but instead focusing on regulation that will actually protect people. And we think there are ways to do that without making it harder for startups to compete.

Collin McCune 00:07:33

To Matt's good point: I walk into a lot of lawmaker offices, and it sounds like I'm pitching my book, but I genuinely say that our interests are aligned with the United States of America's interests. Because the people that we're funding are on the cutting edge.

They're the people who are gonna build the companies that drive the jobs, drive the national security components that we need, and drive the economy. And we wanna see them build over a long time horizon. That is exactly how we should be building policy in the United States.

Of course, in half the offices I walk into, it's: all right, great, get that guy outta here.

Matt Perault 00:08:13

99.9% of people we talk to think that all we want is no regulation, despite both of us writing and speaking extensively about the importance of good governance for creating the kinds of markets we want to create. Collin can speak more to it in crypto; I've learned a lot from our crypto practice, because the idea there is that you really need to separate good actors from bad actors and take account of the differences.

And it's true in AI as well. If we don't have safe AI tools, if there is absolutely no governance, that's not going to create a long-run healthy ecosystem that's good for us and good for people throughout the country.

Collin McCune 00:08:51

I actually can't think of a single example across the portfolio in which we are arguing for zero regulation.

00:08:59 — “Regulate use, not development” and its misinterpretation

Matt Perault 00:08:59

The core component of our AI policy framework, which was developed before my time, and I wish I could take credit but I can't, is to focus on regulating harmful use, not on regulating development. And that sentence, regulate use, do not regulate development, somehow gets interpreted as do not regulate, and for some reason people just omit the part about focusing on regulating harmful use.

That, in our view, is robust and expansive and leaves lots of room for policymakers to take steps that we think are actually really effective in protecting people. So regulating use means regulating when people use AI to violate consumer protection law, when they use AI in a way that violates civil rights law at the state or federal level, or when they use AI to violate state or federal criminal law.

So there's an enormous amount of action there for lawmakers to seize on, and we really want that to be an active component of the governance agenda that we're proposing. For some reason it all gets passed over, and the focus is just on the don't-regulate-development part. I don't exactly understand why that ends up being the case.

Collin McCune 00:09:59

Easy headline.

Erik Torenberg 00:10:01

So there's been a lot that's happened in AI policy, and I wanna get to it. But first, perhaps Matt, you can trace the evolution a bit over the last few years. I believe there was a time when people were pattern matching with social media regulation a bit. Why don't you trace some of the biggest inflection points and debates over the last few years, and we'll get to today?

Maybe Collin.

00:10:16 — The AI policy arc since 2023: hearings, fear narratives, and executive actions

Collin McCune 00:10:16

I think we have to replay a little bit of history, and I wanna get to a point that I think is the really critical point of what we're all facing here. For me, from a policy and government affairs perspective, this conversation started in early 2023.

That was sort of the starting gun. It puttered along and became more and more real over time. But in the fall of 2023, almost exactly two years ago to the day, there was a series of Senate hearings in which some major CEOs from the AI space came and testified, and I think the message folks heard was, one, we need and want to be regulated, which I think remains true today.

That's obviously what Matt and I are working on on a regular basis. But included in some of that testimony was a lot of speculation about the industry, and that absolutely jumpstarted this whole huge wave of conversation around the rise of the Terminator. You know, go hug your families, because we're all gonna be dead in five years.

That spooked Capitol Hill. I mean, they absolutely freaked out about it. And look, rightfully so: you have these really important, powerful people who are building this really important, powerful thing, and they're coming in to tell you that everyone's gonna die in five years.

Right? That's a scary thing for people to hear. And, oh, by the way, we want to be regulated. That starting gun, I think, moved us at hyper speed into this conversation around how do we lock this down, how do we regulate it very, very quickly. I think that led to the Biden executive order, which we have publicly denounced in certain categories.

That executive order led to a lot of the conversation we're now having in the states, and a lot of the bad bills we've seen come through the states. I think it also led to a number of federal proposals that have not been very well thought through. And look, people sit around and ask, was it just some testimony from the CEOs that did this? The answer is no. From my point of view, and they deserve a lot of credit for this, the effective altruist community, backed by large sums of money over 10 years, was very effective at influencing think tanks and nonprofit organizations in DC and the state capitals to push us in a direction where people are very fearful about the technology. And that has significantly shaped the conversation we're having throughout DC and the state capitals and, candidly, on a global stage.

Take the EU AI Act; we're public on that, and there are a lot of very problematic provisions in there. All of this banner of safetyism came from the ten-year head start that these guys have had. So that's a bit of the history. But as an aside to this, I always have to smirk, or smile to try and laugh it off.

When people write these articles about the AI industry pumping all this money into the system, I'm not suggesting there's no money in the system. We're obviously active on the political and policy side; we're not hiding that.

But it is dwarfed by the amount of money that is being spent, and has been spent, over a 10-year window. And candidly, the reason that Matt and I have jobs is that we are playing catch-up. We are here to try to make sure people understand what is actually going on in this conversation, and to be a counterforce to this group of people and this ideology that has been here for a long period of time.

So look, that's kind of the briefer on this.

Matt Perault 00:14:29

And companies, I think, were ready to consider some policy frameworks that would probably have been really challenging for the AI sector in the long run. I think I understand why: I was at Meta, then Facebook, starting in 2011 and through 2019, and after really 2016 there was aggressive criticism of tech companies.

The general framing was: you're not being responsible, and regulation needs to catch up. Governance of social media is behind where the products are. And whatever you think about that, it was really the strong view in the ecosystem that the lack of governance had allowed problematic things to happen.

And so I think when AI was starting to accelerate, and you had certain prevailing political interests driving the conversation, companies rushed to the table, and I think it was a group of five, maybe seven, companies who went into the White House and negotiated voluntary commitments.

We don't even have to make the argument about the importance of representing little tech when you see that there was a set of companies who negotiated an arrangement for what it would look like to build AI at the frontier, with all the current developers who weren't those companies, and all future startups, not represented at the table.

I think that is why we started to think about the value of having more dedicated support around AI policy, because clearly the views of little tech companies weren't represented in the conversation.

Collin McCune 00:16:02

Well, let me just add one thing to this. It's Marc and Ben's story, and they've told it many times; I was in the meeting as well, and everything they've said has been a hundred percent true and accurate. There was a prevailing view among very powerful people in the previous administration that there were going to be only two or three major companies able to compete in the AI landscape, and because that was the case, they needed to be basically locked down and put under an incredibly restrictive policy and regulatory regime, becoming entities that were kind of like an arm of the government. I think that was the most alarming thing we had heard from the administration, on top of an incredibly alarming series of events on the crypto side, where it seemed like they wanted to eradicate it off the face of the planet. So I think all of that led to the position we're in now, and certainly to Matt's hiring and us building out the team, et cetera.

Matt Perault 00:17:12

That narrative is clearly a very alarming, maybe the most alarming, version of this. But even since I've been in this role, I've heard other versions of it where people will say, "oh, don't worry about this framework, it just applies to three or five companies," or "it just applies to five to seven companies," and I think they mean that to provide comfort to us: oh, this isn't gonna cover a lot of startups. But a view of the AI market where there are only a small number of companies building at the frontier is not the vision for the market that we have. We want it to be competitive and diverse at the frontier. And the policy ideas that were coming out of the period Collin's talking about were dramatically different from where they are today, in a way where I think some people have even lost sight of exactly where we were a couple years ago.

There were ideas being proposed, not just by the government but by industry, to require a license to build frontier AI tools and for it to be regulated like nuclear energy.

Collin McCune 00:18:03

Which would be historic for software development.

00:18:05 — Licensing regimes and open source bans compared to nuclear policy

Matt Perault 00:18:05

Yeah, right. Unprecedented. And for it to be regulated like nuclear energy, with an international-level, nuclear-style regulatory regime to govern it. And we've moved: no matter what you think about the right level of governance, there are not a lot of people now saying that what we need is a licensing regime where you literally apply for permission from the government to build the tool. But that wasn't that far in the rearview mirror.

Collin McCune 00:18:31

Yeah. And look, we were also talking about bans on open source, and we're still kicking around that idea at the state level. For those of us who live and breathe the tech stuff on a daily basis, this sounds insane, crazy. But let me make it a little bit more real, right?

Nuclear policy in the United States has yielded two or three new nuclear power plants in the 50-year period since those agencies were created. And look, some people are pro-nuclear, some people are anti-nuclear; I don't wanna get into that debate. The point, though, is that that was not the intended policy of the United States of America.

That was the effect of putting together this agency and what has come from it. And look, had we done the same thing in AI in that period of time, then you don't have the medical advancements, you don't have the breakthroughs, you don't have all of the incredible things that come from this. But beyond that:

We lose to China.

Full stop. You lose to China and then our greatest national security threat becomes the one who has the most powerful technology in the world.

Erik Torenberg 00:19:43

Right. And I think the early concern on open source was that we would somehow be giving it to China, but then we've seen with DeepSeek, et cetera, that they just have it anyway, right?

Collin McCune 00:19:51

Exactly right. As for the idea that we could lock this down, Marc and Ben have talked about this, and I think they've debunked it a number of times.

Erik Torenberg 00:20:00

Just to understand: for the previous administration, what was their calculus? Were they true believers in the fears? Was there some sort of political benefit to the views they had, especially on the crypto side? I don't understand what the constituency is for an anti-crypto stance. How do you make sense of the players and their intentions or motivations, just to understand the calculus there?

Collin McCune 00:20:22

You know, I think that's a really hard one to answer, and I can't pretend to be completely in their minds. I think there are a couple of different competing forces here. One is: what are the constituencies that support that administration, that support that side of the aisle?

And especially over the last 10 to 15 years, there has been a very heavy focus on consumer safety, which, look, is a very important thing, and we're obviously in alignment on that. I think everyone should be in alignment: you have to protect consumers, you have to protect the American public.

But I think a lot of that conversation has been weaponized. It is a big-time moneymaker. A lot of these groups either get backing from very wealthy special interests, or they are small-dollar fundraising off of quick hits: AI's coming for your jobs, donate $5 and we'll make sure we take care of this in Washington for you.

It's a pretty easy manipulation tactic, and it's used by a bunch of people. But I think that held very seriously true. And the other thing here is the old saying: personnel is policy.

A lot of the individuals who were in very senior decision-making roles within that White House and that administration came from this consumer protection background; that was their constituency. They were put in those positions to come after private enterprise.

That was the goal. There's this whole idea out there among some of those folks, and Senator Warren has proposed this many times, that if you're not going after and getting people in the private sector on a regular basis, then you're not working hard enough.

That's probably the second thing. And the third is that we're at this very weird moment where being a builder and being in private enterprise is a bad thing to some policymakers: you're not doing good because you're earning a profit. They certainly won't say that, but the activities and the things they're doing are a hundred percent aligned with that type of idea.

So I think that's the basic crux of it.

Matt Perault 00:23:02

I think the things that motivated that approach were done in good faith. And I think it's what you alluded to earlier: I don't share this view, but there are a lot of people who believe that social media was poorly regulated, that policymakers were asleep at the wheel, and that we woke up at some point, sometime in the 2014 to 2018 period, and realized we had technology that we thought was actually not good for our society. Whether or not you think that's true, it has been a widely held view, on the right and on the left.

It's a bipartisan view. And so when this new technology came on the scene, this was a do-over opportunity for policymakers, right? We can get this right when we didn't get the last thing right. I understand that motivation; it makes a lot of sense. The thing we strongly feel is that the set of policy ideas that came out of that good-faith belief were not the right policy ideas to either protect consumers or lead to a competitive AI market. Many of the politicians were pushing concepts that would've really put a stranglehold on AI startups and would've led to more monopolization of a market that already tends toward monopoly because of the high barriers to entry.

Those politicians, three years before, had been talking about how problematic it was that there wasn't more competition in social media. And then all of a sudden they're behind a licensing regime. I don't think there's much economic evidence that licensing is pro-competitive; typically it's the opposite, right?

The disagreement is less with the core feeling, that we wanna protect people from harmful uses of this technology, and more with the policy concepts that came out of that feeling, which we think would've been disruptive in a problematic way to the future of the AI market.

Erik Torenberg 00:24:51

Anecdotally, it seemed from afar that some of the concerns early on were almost imported from social media, like around disinformation or even DEI concerns.

People were trying to make sure the models were compatible with the speech regime at the time. But then it shifted to: wait, are there more existential concerns around jobs? Or is AI even like nukes, in the sense of people doing harm with it, or AI itself doing harm?

It seemed to escalate a bit, maybe in line with that testimony you alluded to.

Matt Perault 00:25:22

I experienced it as feeling like the goalposts always move. When I was really trying to settle into this regulate-use-not-development policy position, I started asking people: what do we miss? If we regulate use primarily using existing law, what are the things that we miss? And I haven't gotten very many clear answers to that. You can't do illegal things in the universe, and you also can't use AI to do illegal things. And typically, when people list out the set of things they're most concerned about with AI, they're things that are covered by existing law.

Probably not exclusively, but primarily. So that at least seems like a good starting point. Some of the other issues that are understandably ones we should be concerned about have a range of different considerations associated with them. If you're concerned about misinformation, or speech that you think might not be true or might be problematic, there are significant constraints on the government's ability to regulate that.

The First Amendment imposes pretty stringent restrictions, and I think for very good reason, because for the most part you don't want the government to dictate the speech policies of private speech platforms. So those issues might be concerns, but they're not necessarily areas where you want the government to step in and take strong action. There are things we should probably do as a society to try to address those issues, but government regulation maybe isn't the primary one. And again, for most of the things people are most concerned about, real use of the technology for clear, cognizable, real-world harm, existing law typically covers it.

Collin McCune 00:26:55

I have a theory on this. I think everything Matt just said is spot on. But then you're sitting around scratching your head: okay, if use covers it, and there hasn't been an incredibly fair rebuttal on why use is not enough as the focus on the policy and regulatory side, what's the answer? I think we're experiencing this same pattern on the crypto side too. We're having a very spirited debate on the crypto side about how to regulate these tokens: how do you launch a token in the United States, is it a security, is it a commodity?

This is an age-old debate that's plagued traditional securities law for years, and certainly the crypto industry. But what we have found is that a number of people have entered this debate who are actually trying to get at the underlying securities laws.

They want to reform securities laws. They don't wanna reform crypto laws.

Erik Torenberg 00:27:59

Interesting.

Collin McCune 00:27:59

Crypto laws that involve securities. And this is the only venue by which they can enter that conversation, because there's no will from Congress or from policymakers to go and overhaul the securities laws right now.

It's just not there. But what is moving is crypto. So there are all these people now trying to enter this debate, saying, oh, well, we should re-look at this. And I'm like, well, this doesn't have anything to do with it; we shouldn't be entering this conversation. Yet they're still pushing, right?

Yeah, and that's kind of muddying the water. I think a very similar thing is actually happening on the AI side. There are a number of members of Congress who feel like, well, we missed it on the '96 Telecom Act, we didn't do well enough back then, so we need to right the wrongs through the venue of an AI policy conversation. Because think about it: assume that use doesn't go far enough for someone, and this is the same conversation we're having in California or Colorado right now. If use does not go far enough, okay, then it would be really simple if you could have a privacy conversation around this.

If you could have an online content moderation conversation, an algorithmic bias conversation around it, you could do all of that: wedge it through AI, and then, assuming AI is actually going to be the thing we all think it's gonna be, now you've put basically a regulatory funnel on the other side.

You've put up a mesh screen where everything has to run through AI, and therefore it runs through this regulatory proposal you put together.

Matt Perault 00:29:31

The thing that I've really been wrestling with in the last few weeks is whether those kinds of regimes are actually helpful in addressing the harm that they purport to want to address.

And Colorado is a really good example. There are all these bills that have been introduced at the state level; Colorado's is the only one that's passed so far. It set up this regime where you basically have to decide: are you doing a high-risk use of AI or a low-risk use of AI? And this would be for startups that don't have a general counsel, don't have a head of policy, can't hire an outside law firm to figure it out. High risk, low risk. Then if you're high risk, you have to do a bunch of stuff, usually impact assessments, sometimes audit your technology to try to anticipate whether there is going to be bias in your model in some form, which maybe an impact assessment helps you figure out a little bit.

But it's probably not going to eliminate bias entirely, and it certainly isn't going to end racism in our society. In Colorado, the governor and the attorney general have now put pressure on the legislature to roll back this law because they think it's gonna be problematic for AI in Colorado.

And so there was just a special session there to consider various alternatives. One of the alternatives that was introduced proposed codifying that the use of AI to violate Colorado's anti-discrimination statute is illegal. That's consistent with the regulate-harmful-use framing that we've talked about, and instead of this amorphous process where maybe you address bias in some form, maybe you don't, it goes straight at it.

It's not a bank shot; it goes straight at it. If someone uses AI in a way that violates anti-discrimination law, that could be prosecuted; the attorney general could enforce it. And I still don't understand why that approach is somehow less compelling than this complex administrative paperwork approach.

I think it's for the reason Collin's describing: people want a different bite at the apple of bias, I suppose. But it's not clear to me that it's actually the best way to effectuate the outcomes you want, as opposed to just criminalizing, or creating civil penalties for, the harm that you can see clearly.

Collin McCune 00:31:32

In policymaking and bill writing, it's really, really easy to come up with bad ideas, right? Because they're not well thought through: the first thing comes into your head, someone publishes a paper on something, here we go. It takes real hard work to get something that actually works, and it's even harder to go through a political and policy negotiation with a diverse set of stakeholders and actually land the plane on something.

Matt Perault 00:31:57

I think that's part of the reason people think we are anti-governance. Collin lived this history, and I'm coming in late to it, but as we were ramping up our policy apparatus, these were the ideas in the ecosystem: licensing, nuclear-style regulation, FLOPS-threshold-based disclosures, really complicated transparency regimes, impact assessments, audits. Those are a bunch of ideas that we think are not going to help protect people and are gonna make it really hard for low-resource startups. And so we've been trying to say, no, no, no, don't do that.

That sounds like "deregulate," but for whatever reason it's been hard so far to shift toward: here's another set of ideas that we think would be compelling in actually protecting people and creating stronger AI markets.

Erik Torenberg 00:32:43

Right now we don't see terrorists or criminals being aided 1000x by AI in performing terrorism or crime. When I ask people, what are you truly scared about, give me a concrete scenario, they'll say, oh, what about bioterrorism or something? Or what about cybersecurity theft? We seem very far away from that. Is there any amount of development in the next few years, any amount of breakthroughs, where you might say, maybe use isn't enough? Or do we think that will always be a...

Matt Perault 00:33:15

I think it's conceivable, and I think we've been open about that. We think existing law is a good place to start; it's probably not where we end. Martin Casado, one of our general partners, wrote a great piece on marginal risk in AI, basically saying that when there's incremental additional risk, we should look for policy to address that risk. And the situation you're describing might be that. I think what you're getting at is a really important question about potential significant harms that we don't yet contemplate. We get asked often about our regulate-use-not-development framework:

Are you just saying that we should address issues after they occur? And I understand why that's a concern: there might be future harms, and wouldn't it be nice if we could prevent them in advance? But that is how our legal system is designed. Typically, when you talk to people about ways you could try to address potential criminal activity or other legal violations ex ante, before they occur, that's really scary to people. Like: Erik, what if we just learned a lot of information about you and then predicted the likelihood that you might do something unlawful in the future, and if we think it exceeded a certain threshold, then we go and take action against you before you've done it, so we can prevent future crime? You're laughing because it's laughable. We don't want that kind of ex ante surveillance, both because it feels invasive and because it often is ineffective. We might run some test that shows you might be predisposed to some kind of criminal activity, but we don't know until you've done it that you've done it.

And so that kind of approach, again, is motivated by a really valid concern and a valid desire to prevent harm: what if we could prevent harm before it's occurred? The challenge is that the regulatory framework probably won't do that. It probably won't have the effect of preventing harm.

And there are all these costs associated with it, mainly, from our perspective, inhibiting startup activity.

Erik Torenberg 00:35:12

Marc once told me a joke on a podcast: a man says, I go to the government because I have this big problem; now I get a lot of regulation; now I have two problems.

Okay, let's talk about the state of AI policy today. A lot has happened in the last few months with the moratorium and the Action Plan. What are some of the things we're excited about right now? What are some of the things we're less excited about? Why don't we give a breakdown of where we're at?

Matt Perault 00:35:40

So given what Collin's described about where things were a couple years ago, it's great to see the federal government, certainly the executive branch, but not just the executive branch, I think this extends to Congress across both aisles, being supportive of frameworks that we think are much better for little tech.

That means trying to identify areas where regulatory burden outweighs value and where we can right-size regulation to make it easier for AI startups. As Collin said, support for open source: we were in a really different place on that a couple years ago, and now there seems to be much more consensus, actually spanning the end of the last administration and the current administration, around the value of open source for competition and innovation.

The National AI Action Plan also had great stuff in it about thinking through the balance between the federal government and state governments, which is something we've done a lot of thinking about. There's an important role for each, but we think the federal government should really lead regulation of AI development, and states should police harmful conduct within their borders, and there's stuff in the Action Plan that would try to ensure those respective roles. There's also a lot in the Action Plan that wasn't really talked about much, that wasn't the headline-grabbing stuff, which I thought was incredibly compelling in terms of trying to create a future for AI that just works better for more people.

A really good example is the stuff on worker retraining, which focused on different programs that could help workers if they're displaced as a result of AI, as well as on monitoring AI markets and labor markets to make sure we understand when there are significant labor disruptions. I think that gets at the point you were alluding to a couple minutes ago about what happens when there's something really disruptive in the future.

Can you predict with certainty that there won't be some crazy disruptive thing? No, we can't; there might be significant labor disruption. Others at the firm have talked extensively about how there are always worries about labor disruption when a new technology is introduced.

Typically, there are increases in productivity that end up being good for labor overall. We think that's the direction of travel, but you never know. We can't predict it with certainty. And so I think it's a really strong step to try to just monitor labor markets to see what the disruption might look like so that we're set up to take strong policy action in the future.

Collin McCune 00:37:53

Can I, can I just say one thing about the AI Action Plan?

Matt Perault 00:37:56

Sure.

Collin McCune 00:37:56

I don't wanna juxtapose this with what we saw under the Biden administration; there was an incredible amount of activity under the Biden administration and an incredible amount of activity under the Trump administration. But look, I view these executive orders and plans that come out of an administration as very, very important.

Some of them have true policy: they direct the agencies to do things, to come out with reports and undertake rulemakings and things like that. But the AI Action Plan, for me, was so significant because it turned the conversation on its head. Before, it was: we have to focus only on safety, with a splash of innovation. Now it is: we understand how important this is from a national security perspective, we understand how important this is from an economic perspective, and we need to make sure that we win while keeping people safe. That shift in rhetoric is incredibly important, because it signals to the rest of the world, to other governments, that this is the position of the United States and will be for the next three and a half years, and it signals that position to Congress. So when Congress is looking at potentially taking up pieces of legislation, or taking actions, or even holding committee hearings, which for the broad base of what we're talking about are fairly insignificant, all of that is kept in mind.

So now the conversation has shifted significantly, and that is really, really important.

Erik Torenberg 00:39:30

Speaking of winning, Collin, I'm curious for our thoughts on AI policy vis-a-vis China, whether it's export controls or any of the other issues we care about.

00:39:39 — China, export controls, and open source in a global race

Collin McCune 00:39:39

Yeah, look, first and foremost, we've talked about it already: we have to win, right? That is the main thrust of a lot of what we're doing here and a lot of the way we think about this from a firm perspective. First is making sure that the founders and the builders can build, with appropriate safeguards and an appropriate regulatory structure.

The second is: how do we win and make sure that America is the place where AI is most functional and foundational vis-a-vis China? There has been a long conversation about the Diffusion Rule that came out of the Biden administration, specifically on export controls.

Many panned that proposal; a lot of people suggested it was probably too restrictive and wasn't the right way to think about things. We have spent most of our time, with Matt leading this effort, focused on how the underlying models are regulated and, hopefully, how the use of these models is regulated, rather than specifically on the export control piece.

What I will say, though, is that some of the proposals are very concerning: some that came out of the Biden administration, some that we've seen at the state level, and some that we've seen at the congressional level, from a federal standpoint, dealing specifically with export controls on models themselves. And we're still having this conversation. There is a policy set that has been kicked around for a while called outbound investment policy, which is basically about how much US money from the private sector is flowing into Chinese companies.

Very noble, laudable; we're super supportive of that concept. We are a very America-first sort of organization here; we're investing primarily in American companies and American founders. So we're very supportive of it. But then you edge into the idea that we might inadvertently ban US open source models from being exported out of the country, when, by the definition of open source, there are no walls around these types of things. So that's one of the areas we've been very focused on. It's obviously very important to make sure we don't have these very powerful, US-made technologies in the hands of our Chinese counterparts, with the PLA and CCP using them against us.

But I also think we need to make sure we're not extending too far and limiting the ability of open source technologies to be the platform around the world. The final point I'd make here is that we do, ultimately and fundamentally, have a decision to make as the US: do we want people using US products across the world?

That helps for a whole bunch of reasons, certainly on soft power from a national security perspective. Or do we want people to use Chinese products? The more we lock down American products, the more the Chinese will enter those markets and take a land grab in that space.

Erik Torenberg 00:43:04

Why don't you get into what happened with the moratorium and the fallout that ensued?

Collin McCune 00:43:07

I think this one is a bit complicated. There was a perception about the moratorium when it came out that it would've prohibited all state law from existing for a 10-year window. Obviously, that's a long period of time.

I'm not sure we would necessarily completely agree with that policy stance. From our point of view, that's a misinterpretation, for a whole bunch of reasons, of what the language actually said. But sometimes in DC, a lot of times in DC, perception is reality, and that perception took hold.

But there were also strong competing forces, like we've discussed, from the doomer crowd or the safety crowd, who were very anti and who used all the tentacles they've spread out over the last decade to move in and try to kill this.

I think they were also successful in leveraging some other industries to come in and try to kill this thing. And look, by virtue of the underlying procedural vehicle, the reconciliation package it was moving in, it was a partisan exercise. It was gonna be Republicans versus Democrats, and that was that, right?

Nothing, not even a prominent AI policy dropped into a reconciliation package, was ever going to drag Democratic votes over, because it was such a big Christmas-tree-style thing with all kinds of tax reform provisions, et cetera. And when you're in one of those situations, the margins on the votes become very, very small.

So all it took was one or two Republican senators hitching their wagon to some of these ideas that were out there to tank this thing, right? And look, I think that's a situation you're gonna face with any political, policy, or legislative outcome, any issue you're gonna be running within Congress.

But more so than anything, and we heard this repeatedly from a whole bunch of different people, and it's what we experienced too: the industry was just not organized well enough. And that's not just the industry; it's also the people who care about this thing who aren't actually industry stakeholders.

The stakeholders who were pro some level of moratorium, or some level of preemption, were just not organized. That was both an eye-opening moment and an important moment, because what we have done in the three or four months since this thing went down is take a long, hard look at what we need to do collectively, as a coalition, to be in a better position next time. So what does that look like? First and foremost, it comes with writing, doing podcasts, talking about these things, talking about the details of what's actually in these proposals and what they actually mean for the states and the federal government, to make sure we're fighting through the FUD that's coming through, because it's always gonna be there; there's misrepresentation all over the field. The second piece is: let's all get on the same page, which I think we've worked very hard to do, and where we can find alignment, I think we've found it, between big, medium, and little.

And the third, and probably the most important, is: what are we doing on the political advocacy side to make sure we have the appropriate tools to push forward in a way that ensures America continues to lead and that we don't lose this race to China? That's part of the reason we recently announced our donation to the Leading the Future PAC, which will have several different entities underneath it, and which I think is designed to be the political center of gravity in the space.

It will fight at the federal level and the state and local level. So we're happy to be a part of it, and we expect there will be others who join this common-cause fight on the AI side.

00:47:21 — Federal vs. state lanes, moratorium fallout, and what’s next

Erik Torenberg 00:47:12

If we could wave a wand, what would we like to see done at the state level versus the federal level? How should we think about that interplay, compared to where we're at now?

Matt Perault 00:47:21

I think the helpful answer here comes from the Constitution. The Constitution actually lays out a role for the federal government and a role for state governments. The federal government takes the lead on interstate commerce, so governing a national AI market and governing AI development is, we think, primarily Congress's role.

Sometimes when people say that, what other people hear, for some reason, is that states should do nothing. We've tried very hard to be deliberate in not saying that, and in making clear that states have an incredibly important role to play in policing harmful conduct within their jurisdictions.

Criminal law is a perfect example. There is some criminal law at the federal level, but the bulk of criminal law is at the state level. When you think about routine crimes, if you're going to prosecute a perpetrator, it's likely that would occur under state law. So to the extent we want to take account of local activity where there's criminal conduct involved, and we wanna make sure the laws are robust enough to protect people from that activity, that's gonna be primarily state law.

Oddly enough, as Collin is describing, this isn't the delineation we started out with. A lot of state laws have taken the approach, sometimes explicitly, of: Congress hasn't acted, so we have a responsibility to act. And that's true to some extent.

States can act within their constitutional lane, but some of what states have done has gone outside that lane. So we actually just this week released a post on potential dormant commerce clause concerns associated with state laws. The basic idea is that there's a constitutional test that says states cannot excessively burden out-of-state commerce when that burden greatly exceeds the in-state, local benefits. Courts actually weigh that; there's a balancing test: do the costs to out-of-state activity significantly outweigh the benefits on the local side? And we think that, at least for some of the proposals that have been introduced, the benefits are somewhat diminished relative to what the proponents think they are, and the costs are significant; the cost to a developer in Washington state of complying with a law in California or a law in New York is gonna be significant. Our hope is not that the dormant commerce clause ends up making it hard for states to enact laws, but that it serves as a guidepost for states around the kinds of laws they might actually introduce.

And I think it pushes in a direction that's consistent with our agenda, which is for states to take an active role in legislating and enforcing laws focused on harmful use.

Erik Torenberg 00:50:07

Looking at the next six months to a year, what are the issues that we're most focused on, or that we think are going to play a role in the conversation?

Collin McCune 00:50:14

First and foremost, I think it's some level of federal preemption, and I want to be very specific about this, again, to Matt's point: we're not talking about preempting all state law. We're talking about making sure that we have a federal framework specifically for model regulation and, hopefully, for how models can be used.

I think that's going to be critical because, just like any other technology, AI can't live under a 50-state patchwork. That's been the biggest issue we've been fighting over the last year and a half or so. Beyond that, there are some other policy areas I think will be handled, like workforce training.

I think there are some AI literacy issues coming up, and obviously there's a huge, robust conversation around data centers and energy that will be really important. But above all, most of our time and energy will be focused on trying to get some level of federal standard here, to draw the dividing line between the federal and state governments, which Matt has already done a ton of great work on.

Matt Perault 00:51:29

I think this is just a super exciting policy moment for AI. Over the last couple of years, a bunch of ideas have been proposed, and for the reasons we've discussed, we think those ideas fall short, both in terms of protecting consumers and in terms of ensuring there's a robust startup ecosystem. Most of those laws have actually not succeeded in passing. There were a number of laws introduced at the state level in this past year's legislative sessions that we thought had a strong likelihood of passing, and to date, none of them have passed. Collin has also been building out the expertise, skill set, and capacity on his team. We just hired Kevin McKinley to lead our work in state policy, and I think he will help us take a real affirmative position in the legislative sessions ahead on what AI policy that's good for startups might actually look like.

So instead of being in the position of saying no, because we started late and with one hand behind our back, I think we're in a position to articulate and advance a proactive AI agenda that's compelling. Collin hit the main parts of it: ensuring proper roles for the federal and state governments; focusing on regulating harmful use, not development, where there are specific things you can do in terms of increasing capacity in enforcement agencies; making clear that AI is not a defense to claims brought under existing criminal or civil law; and technical training for government officials to make sure they can identify and prosecute cases where AI is used in a harmful way.

And then there's all the infrastructure and talent work Collin is describing: worker retraining, AI literacy. We've also given some thought to an idea that has been articulated by a number of lawmakers and was in the National AI Action Plan: creating a central resource housed in the federal government.

You could also do it in state governments as well, to lower some of the barriers to entry for startups, like compute costs and data access. We think that's really compelling in terms of ensuring that startups can compete. And that idea, like many of these, is bipartisan; it's been supported by the current administration.

It was supported by leading Democrats over the last couple of years. So that's the kind of thing we're hoping will get some traction in policy circles when we have the room and position to really advocate for an affirmative agenda.

Collin McCune 00:53:46

We are not always in a hundred percent alignment with other people in the industry.

That's true big, medium, and little, across the board, and there are also consumer advocacy groups that obviously feel differently about these things. But for the most part, the industry is generally aligned on some level of a federal standard here, and on understanding that, again, the thing that won't work is a 50-state patchwork.

And I think that's super important because, for the first time, you actually have that alignment. If you have that alignment, that's the kind of momentum you can use to actually push things over the finish line and get something done.

And look, the Trump administration, to their credit, has been incredibly supportive of this idea too.

Matt Perault 00:54:31

That's an incredibly important point. One criticism usually raised, often implicitly, is: "Hey, you're the little guys, but you often align with the big guys, so aren't you just in favor of a deregulatory agenda that works for big tech?" One of the things I think is really extraordinary about the Little Tech Agenda is that it's nonpartisan and doesn't take a position on big versus little. It basically says: here's the agenda.

When you agree with us, we'll support you, and when you disagree with us, we'll oppose you. That's not party line, and it's not big versus little. So I think the initial phase that Collin was referring to in the recent set of AI policy was a phase of divergence between big and little.

With the licensing regime, the bigs were pushing it and little tech was concerned about it. Then there was a period of convergence: if you look at the National AI Action Plan comments across a range of different providers, as Collin's saying, a lot of them had some core similarities.

Lots of large companies have advocated for federal preemption. We don't oppose that just because big companies are advocating for it; we think it's good for startups. I'm curious how the political chips will fall; Collin really understands this in a way that I don't.

I think it's possible we're in a period of some divergence. One thing we hear repeatedly, which is sort of funny, is people will bring us something and say: industry agrees with this, so we expect you to agree; industry already agreed, you can't disagree. And we say the big parts of the industry have agreed, and sometimes we agree with them, but sometimes we have different views.

So when we disagree, it's not because we're trying to blow up a policy process or make it difficult for lawmakers who are trying to move something forward. It's because we're looking at it through this particular lens. I hope it's not the case, but I think there might be more fracturing in the months ahead.

Collin McCune 00:56:25

Yeah, I agree with you on that. And by people, he means lawmakers, just to be specific.

Erik Torenberg 00:56:30

That's a great place to wrap. Collin, Matt, thanks so much for coming on the podcast.

This transcript has been lightly edited for readability.

Resources

Read the Little Tech Agenda: https://a16z.com/the-little-tech-agenda/

Read ‘Regulate AI Use, Not AI Development’: https://a16z.com/regulate-ai-use-not-ai-development/

Read Martin’s article ‘Base AI Policy on Evidence, Not Existential Angst’: https://a16z.com/base-ai-policy-on-evidence-not-existential-angst/

Read ‘Setting the Agenda for Global AI Leadership’: https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/

Read ‘The Commerce Clause in the Age of AI’: https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures/

Find Matt on X: https://x.com/MattPerault

Find Collin on X: https://x.com/Collin_McCune

Stay Updated:

If you enjoy the show, please follow and leave us a rating and review on Apple Podcasts or Spotify.

Find a16z on Twitter: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Subscribe on your favorite podcast app: https://a16z.simplecast.com/

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.
