There's no way the lawmakers who drafted Section 230 of the Communications Decency Act way back in 1996 could have known that it would go on to determine the role user-generated content would play in the explosive growth of the internet. Those congressmen probably also wouldn't have guessed that Section 230 would end up, 25 years later, becoming a central sticking point in the debate over free speech online. The complex history of CDA 230 has as many twists and turns as there are differing interpretations of what the law actually says.
On this episode of Gadget Lab, WIRED senior politics reporter Gilad Edelman joins us yet again to talk about the particulars of Section 230. He's the author of this month's WIRED cover story on this very topic. We also talk about the Facebook Oversight Board's ruling on the company's decision to indefinitely suspend President Trump from the platform.
Read Gilad’s cover story about Section 230 here. Read his story about the Facebook Oversight Committee’s decision here. Trump’s DIY Twitter feed is a thing that exists. Watch the video of Aeropress inventor Alan Adler here.
Gilad recommends Aeropress. Mike recommends the Shop app. Lauren recommends the podcast How to Save a Planet.
Gilad Edelman can be found on Twitter @GiladEdelman. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.
If you have feedback about the show, or just want to enter to win a $50 gift card, take our brief listener survey here.
How to Listen
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:
If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed.
Lauren Goode: Mike.
Michael Calore: Lauren.
LG: Mike, does Section 230 keep you up at night?
MC: A lot of things do, but Section 230, no.
LG: Are you even a content moderation nerd?
MC: No, I am not.
[Gadget Lab intro theme music plays]
LG: Hey everyone. Welcome to Gadget Lab. I'm Lauren Goode. I'm a senior writer at WIRED. We're taping this episode really early in the morning, this week. So my voice sounds terrible. Thanks for joining us.
MC: Hi. I'm Michael Calore. I'm a senior editor here at WIRED, and I'm also taping this early in the morning because I live in the same neighborhood.
LG: Yeah, but you sound good.
MC: Well, thank you.
LG: Yeah. All right. We're going to try to get through this. We're also joined, yet again, by WIRED senior politics reporter, and we should note Josh Hawley's new best friend, Gilad Edelman, joining us from Washington DC. And it turns out, when you slide into my DMs and say, "WTF, where's Gilad on the podcast?" we actually do listen, because we've invited him back again. Hey Gilad.
Gilad Edelman: Hi everyone. Coming to you from late morning in Washington DC.
LG: See, he sounds so professional.
GE: This is going to be you in three hours.
LG: I hope I don't sound exactly like that in three hours. All right, today, we are talking about free speech on the internet. And since we can't seem to get away from Facebook news, in the second half of the show, we're going to talk about a big decision that was made by the company's Oversight Board this week. And it involves the man who … well, we haven't talked about him very much since last year, but who occupied a lot of space in our collective minds for four years. But first, let's talk about Section 230. Now, you might be thinking, "Oh, OK, this is going to be a boring episode," but it's not, OK? This is going to be really good, and this is important, because if you've spent really any time on the internet, you've probably heard about Section 230, or it's affected your experience on the internet.
It's a piece of legislation that was passed in the 1990s that prevents an online platform from being liable for what its users post on the platform. And it's been in the news a lot these past few months, being both propped up and attacked from all sides of the political aisle. And while the law is having quite a moment right now, it's still somewhat misunderstood. Now Gilad here has spent months, literally months because that's how we do features at WIRED, writing a big cover story about Section 230. So now I'm going to ask him to just recite the law from memory. Go ahead, Gilad.
GE: Oh, I thought I was going to read the entire story out loud, like a sort of audio book episode of the Gadget Lab.
LG: Spoken word style, please.
GE: I don't know how facetious you were being, but we certainly can read the most important part of Section 230 out loud because it is pretty short and it says, no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
LG: Now, this Section 230 is actually couched in the Communications Decency Act, right? Which is the law that was passed in the '90s. Talk a little bit about the history of the CDA and then how Section 230 came to be.
GE: Sure. So the CDA, the Communications Decency Act, was this big bill that was itself part of an even bigger telecom regulation bill. And then Section 230 was a section tucked into that bill.
LG: So it's like a turducken of law.
GE: Yeah, it was the duck.
GE: So the Communications Decency Act was this sort of censorious, reactionary bill that basically tried to outlaw obscenity on the internet, but that got struck down almost immediately by the Supreme Court on 1st Amendment grounds. But the main thing that survived, that the court allowed to remain part of the law, was this thing called Section 230. To understand Section 230, we've got to go back a little bit. So what happened was, in the early days of the internet, it wasn't really clear how legal liability rules that applied in our long history of law in the analog world would apply online. And particularly there was a question about something called the republication rule of defamation law. So if I say something false and defamatory about Calore, he can sue me. If the newspaper quotes me saying that, he can also sue the newspaper.
MC: That's right.
GE: Go get 'em Calore. So in the early days of the internet, there were cases that sought to interpret that principle for these new things called websites.
LG: Right. And one of them involved CompuServe and another involved Prodigy. And for those of you who aren't old enough to remember those, just Google them.
GE: Right. So these are the early ISPs, which I am old enough to remember. When we got our first dial up modem, I think we had Prodigy. So, the first case involved, one of these ISPs hosting a message board and someone posted something nasty about someone else, that someone else sued the company for hosting that message. And the court said, "The republication rule does not apply here. They're not like a newspaper that published something. They're more like a distributor or a bookstore. They're just hosting this bulletin board. They're passive." OK. Then another case rolled around, in a New York state trial court. So really a low level, the lowest level type of court. And in this case, the investment firm, Stratton Oakmont, sued Prodigy. Similar scenario. We're talking about a message board and somebody posted something mean about Stratton Oakmont, calling them liars and criminals.
Now parentheses, they were. This is the firm that is depicted in The Wolf of Wall Street. So these allegations were actually true, which is a funny sidebar, but they sued Prodigy. And in this case, it didn't go so well, because Prodigy had advertised its ability to maintain a family friendly environment. It was trying to market itself as, we have moderation tools, and the judge held that against them. They said, "Well, if you're moderating, that means you know what's on your platform, and that means you have a responsibility to deal with it. You can't just walk away and say, 'Oh, we're just a passive distributor.'"
And the consequences of that ruling were really scary for the young internet industry, because it created what has been called the moderator's dilemma, where it's like, "OK, so you're saying we have to either turn a completely blind eye to everything that's on our platform, no matter how gross, or offensive, or whatever, or if we try to do the right thing and impose some moderation and try to make it a cleaner, or a safer, whatever environment, that's going to put us at legal risk?"
That was a really perverse set of incentives, because you're choosing between a complete ugly free-for-all, or way overly heavy-handed moderation. Because if you're going to take on big legal risk every time something bad goes up, you're just going to let very few things up in the first place. Section 230 was written to solve the moderator's dilemma. And it was written by two members of Congress: Ron Wyden, a Democrat who's now a Senator, and a guy named Chris Cox, who's not in Congress anymore. And-
LG: Not to be confused with the Chris Cox of Facebook.
GE: Exactly. Or the actor, Chris Cox. Am I making that up?
LG: There's an actor named Chris Cox?
MC: Yeah, there is one.
LG: Oh, all right.
MC: I'm sure there's several other people in several other professions with that name as well. But we're talking about the former Congressman, right?
GE: Right. Congressman Chris Cox did not go on to be an early Facebook employee. Anyhow. They met for lunch and they hashed out a plan, "How are we going to deal with this?" And the plan they came up with to solve the moderator's dilemma was Section 230. There are two key parts. We're going to come back to the part that I already read out for you at the beginning of the show. There's another part, and it says that companies are not liable for taking good faith efforts to moderate their platforms. That solves the moderator's dilemma, because it says, "Don't worry. If you make the effort to get rid of objectionable stuff, that's not going to make you legally liable in court." They also passed the first part of the law, which I mentioned earlier, which says that someone like Prodigy that hosted a bulletin board or something would not be treated as the publisher or the speaker of something that a user posted, which means we're not going to stick with this republication rule that applies to traditional media.
LG: Right. And just to bring it to modern day, we at WIRED are traditional media, right? We're publishers. But if you look at something like Facebook, or Twitter, or Reddit, despite the fact that a large number of people, not only in the US but in the world, get their news from those platforms now, because it's being distributed on those platforms, those modern day internet platforms are not liable for what's published.
GE: Exactly. You couldn't apply the exact same rules that apply to WIRED to Facebook if you wanted anything like Facebook, or Reddit, or Twitter to exist at any kind of scale, because the amount of content that's going up on those sites is many orders of magnitude more than what WIRED publishes every day. So it wouldn't make sense to hold them to the same strict standard that a traditional publication is held to, which is basically, if we publish something defamatory, we are responsible for it.
LG: Mm-hmm (affirmative).
GE: We can't say, "Oh, sorry. Got that one wrong. We'll run a correction." We can be on the hook, and so we have to be very careful not to do that. Where things got a little weird with Section 230, though, is that there's a lot of room in between the rules that apply to WIRED and traditional publishers, and no rules at all. And Section 230, instead of letting some compromise position develop to apply to these new types of platforms and mediums, just turned the dial all the way to the other end and said, "You're just not going to be liable."
MC: That's the thing that's become the biggest issue, especially among lawmakers, right? Because if people can express far right views or far left views, or post whatever meme they want about anybody, then it's going to make people upset, they're going to want to sue, they're going to get angry that they can't sue, and then they're going to want to get the law changed. And anybody who has been following this knows that there are a lot of US lawmakers who are angry about Section 230, but when they talk about it, it's pretty clear that they don't really understand it and they don't really know what repealing it or amending it would mean. So I'm hoping you can tell us a little bit about this confusion.
GE: Absolutely. So there are three main camps here. Two of the camps hate Section 230 for different reasons. So both Joe Biden and Donald Trump have said, Section 230 should be repealed. Biden said it once in maybe a hasty moment whereas Trump made a whole campaign thing about it last year. But they've both said it, and in this respect, they represent the liberal and conservative version of, "I want to kill Section 230." The liberal version is Section 230 is letting people get away with telling lies about us on the internet. The conservative version is kind of the flip side. Donald Trump wasn't mad that platforms were leaving too much stuff up. He and other conservatives are mad that the platforms are taking too much stuff down. So the two different parties have equal and opposite bones to pick with the two different parts of the law.
Those are two camps, both pointing their guns at Section 230. And then there's the third camp, which says, "Don't touch a hair on Section 230's head, because if you do, you'll just ruin everything that's good about the internet. If you pull this brick out from the foundation of internet law, the whole thing will come crumbling down. We won't have free speech. And guess what? You'll also just ruin the economy, because there's these big companies that are so profitable, and it's wonderful that they're profitable, and they're profitable because they host user generated content." None of these are correct. There's a nugget certainly in the Joe Biden critique, in the sense that yes, as I discussed in the article, there are times when people get away with the kinds of lies that actually give rise to legal risk. Most of the time lying-
LG: When you say that people … Just to clarify, you mean the platform owners?
GE: Exactly. Platforms that host and-
LG: Like the people who run Facebook basically.
GE: Right. The platforms like Facebook, or whoever else, that host this content make money by hosting the content, and having people pay attention to it, and targeting ads on that basis. They don't have to worry about the subset of things that people say online that can give rise to legal liability. So the thing that a lot of the Democrats attacking the law need to really understand is that most things that people say online are just perfectly legal. They're protected by the 1st Amendment, right? It's not illegal to lie, or be an idiot, or just disagree, or be extreme, or anything like that. However, there's a subset of all of those adjectives that is illegal, or that you can be sued over, right? When you defame someone: "Mike Calore murders puppies." That's not true, but if I said it as if it were true, I'm damaging his reputation.
Other things, like harassment, threatening someone, doxing someone, right? So that type of stuff is where platforms … it's really left up to them to self-regulate. And some of them self-regulate pretty well for some of these things, and some don't try at all, and many are in the middle. So that type of Democratic critique really overstates what you could accomplish if Section 230 was not there. The Donald Trump critique is wrong because there's just … It's really Ted Cruz, the Republican Senator from Texas, who popularized the idea. The first time this came on my radar, he was grilling Mark Zuckerberg in 2018. And he said something like, "Section 230 immunity is predicated on the condition …" This is how he talks. "… of partisan neutrality," right? That you have to treat liberals and conservatives equally. And that's just made up. It's not in the law, and it's not illegal to have a left-leaning or right-leaning social media platform.
It's true that you have to do your moderation in good faith. And so, that's the one little nugget that this critique is building off of, but there's just nothing about the law that says you have to maintain partisan neutrality. So that's why the two loudest political critiques are both dumb. Now, those are easy to deal with. What I was much more interested in engaging with, in the article, was the most robust, absolutist defense of Section 230. Is it true that if you peeled back Section 230, you would bring the internet, and all the good stuff about it, crumbling down, in addition to whatever bad stuff you think you're addressing? And the more I looked into it, the more I decided not really.
MC: How so?
GE: So what would have happened if there was no Section 230? Remember, we started off by talking about forms of liability in law that existed before the internet, right? America, like other former British colonies, has the common law tradition, where the rules of law are shaped over time in cases that judges rule on, and over time, by applying existing precedents to new situations, the law develops. And of course, legislatures also pass laws that affect it, like Section 230, for example. So it's not purely judges, but that's the meat of civil liability, who can you sue, who can sue you, and it has evolved over time this way. Now, we talked about the case that created the moderator's dilemma in the '90s. There's no way that that would have just become the default law in the nation.
It was one state level trial court. And its decision became instantly controversial, because everyone was like, "Whoa, this will be a disaster." And so if 230 hadn't been passed, it's possible that there would have been more painful decisions, but by far the likeliest outcome is that higher courts, and other courts around the country, would have come up with more thoughtful ways to define the legal responsibilities of these new internet platforms. Section 230 put a plug in that and didn't let that evolve.
LG: Well, you mean, they were basically formulating a law that would potentially apply to businesses that did not even exist yet. So it would be hard, I think, to have established the appropriate law at that point in time. Gilad, we do have to move on to the next segment, but very quickly, tell us what's going to happen. I mean, Capitol Hill has been buzzing about Section 230 for a while now. So is there any proposed amendment or new legislation out there that actually has legs?
GE: There are lots of bills that have been floated that have a greater or lesser chance of going anywhere. There's a bipartisan one called the PACT Act, which has some modest, pretty good ideas in there, especially around at least making companies be transparent about how they moderate content. There's also a proposal that I write about in my story that has not yet turned into a bill, but I think is very interesting. It's from a professor named Danielle Citron. And her idea, it's elegant. She basically says, "Let's just tweak the immunity part of the law so that it only applies to companies that take reasonable steps to address or remove whatever kind of content is being sued over." You wouldn't have to prove every single time that you handled the thing the right way, you would just have to show a judge, "Look, we have a reasonable system in place to handle these problems, even if this one slipped by us."
And I think that's a pretty interesting compromise position, because it recognizes every platform is going to screw up sometimes. You can't have perfect enforcement, but at least this will definitely wipe out the bad faith companies out there. There are sites that exist on the internet basically for the purpose of having people post revenge material about their ex-girlfriends, or just ugly untrue rumors about people, or slander businesses so that the website can then shake those businesses down. This is like the dark underbelly of stuff. Forget about Facebook and YouTube for a second. There's this dark underbelly of bad actors who benefit from Section 230. And so, if you at least said, you have to have some reasonable procedures in place, that wipes out the bad guys, because if you're trying to do wrong, then you're not being reasonable. And then it would also hold the bigger players, and these sort of sloppy but not malicious actors, more accountable for things that they don't have to worry about currently.
LG: All right, everyone should go read Gilad's story about Section 230 on WIRED.com. And if you are a print subscriber to WIRED, you can also read it in the June issue. It is the cover. It's great. It is, I think, one of the best distillations of Section 230 I've read, but I'm a little bit biased, of course.
GE: But you're biased against me though. So this means even more.
LG: Right. Yes. I should clearly be moderated. All right. Stay with us. We'll be right back after this quick break.
LG: Perhaps the most high profile example of the debate over platforms and their users happened earlier this year, when just about every social media site blocked former president Donald Trump after he allegedly helped incite the riot at the Capitol in early January. So Facebook has this Oversight Board, right? And it's this supposedly neutral entity within Facebook that has the power to override even Mark Zuckerberg. And this week, on Wednesday, this Oversight Board made the decision to uphold Facebook's ban of Donald Trump's account, for now anyway. But obviously, as we've just talked about, this is a lot more than just a simple yes or no decision. And Gilad, somehow, as you were finishing this cover story about Section 230, you also wrote a story about Facebook's Oversight Board. So tell us what the board said and essentially how they arrived at this decision to uphold the ban on Trump's account.
GE: They sort of upheld the ban. It was kind of a Solomonic decision. Of course, Solomon didn't really split the baby, that was a ploy by him, but he's remembered for splitting the baby.
LG: We're going to have to unpack that in another episode.
GE: Yeah. Come back for Bible study with Lauren and Gilad.
LG: We can divide it into Old Testament and New Testament.
GE: You know I'm on that Old Testament. Because I'm Jewish.
LG: We know it.
GE: This is for the listeners, because they can't see my curls. So the question that Facebook posed to the Facebook Oversight Board was, "Were we right to suspend Donald Trump's account indefinitely?" And the Oversight Board said, "You were right to suspend his account, but you were wrong to do so indefinitely." And then the last interesting thing is, the board didn't say, "Here's the right duration to ban him." Because the board could have said, "You were wrong to ban him indefinitely, and now time's up. You have to let him back." Or it could have said, "You were wrong to ban him indefinitely, you should have banned him permanently." It didn't do that. It said, "You, Facebook, have to apply your rules, which don't include indefinite bans, and come to a decision here, and you have six months to do so." So in that respect, it kicked the ultimate decision, the resolution of the Donald Trump Facebook saga, down the road.
MC: So who is on this Oversight Board? Do we know?
GE: We do. The membership is public. It's a mix of law professor types, and former judges, and human rights people. I can't rattle off the membership by memory, I'm afraid, but it is a pretty respectable bunch of people that tries to draw somewhat from different parts of the ideological spectrum and incorporate voices from around the world.
MC: So Facebook has this thing called the newsworthiness rule, which generally gives political leaders extra leeway to say things on Facebook that not just anyone is allowed to say. It basically lets Facebook bend the rules if you're a world leader, because what you say is newsworthy and it really matters in the real world. That means that Facebook is going to wait longer to pull the plug on anyone in a position of political power. What did the Oversight Board say about this policy?
GE: This was probably the most interesting part to me. So the Oversight Board, what it tells Facebook to do about Trump's account is binding. So Facebook does have to do what the board says. Facebook also asked for the board's advice, and the board has sort of freewheeling power to just recommend policy changes that Facebook is not required to follow. And so, on that front, the board said, "Basically, Facebook, it was good that you took into account the real-world context of January 6th, and that you factored in what was happening when you were judging the risk of harm coming from Donald Trump's statements." Personal bracket, this is me speaking again: I think there's a big question about whether the statements that actually got Trump banned did the bad things that Facebook is saying they did, but that's a side note.
The Oversight Board agreed that because Trump was saying nice things about the rioters, even as he told them to go home, and because he was repeating the lie about the stolen election, even as he was telling the rioters to go home, the Oversight Board said, "Yeah, in the context of the ongoing violence, the fact that he was praising these people, the fact that he was repeating the stolen election thing, that created a risk of harm, and so this is the right way to think about it. It's not just when somebody posts, it's about what's happening in the world." And then the second part of this is, they said, "Facebook, you should recognize that political leaders have a bigger microphone. And so, that can raise the dangerousness of what they do." And so that is arguing for the opposite of the newsworthiness exception, because the idea behind the newsworthiness exception for public figures is people have a right and an interest in knowing what a public official like Donald Trump is saying.
But what the Oversight Board is saying is, actually, everybody should be subject to the same set of rules, and when you're making this risk assessment, you have to recognize that the more prominent, powerful people have greater ability to cause harm. And so, if anything, the implication there is that Facebook should have a quicker hook on violations by people with huge platforms. And a lot of people who commented on this case, from conservatives to liberals, I found examples of people urging the Oversight Board to go in this direction, because it's pretty intuitive, frankly. If I post something to Facebook that is inciting violence or something, as stupid and bad as that would be, no one's going to listen to me.
GE: But if Donald Trump does it, or Angela Merkel does it, or Benjamin Netanyahu, or Rodrigo Duterte, or whoever, with millions of followers and political support, the real world consequences have the potential to be much greater. So what I'm really curious to see is how Facebook responds to that policy suggestion.
LG: Right. So Facebook has been using this idea of newsworthiness to give these leaders a little bit of extra leeway. It probably errs on the side of leaving things up, whereas the board is arguing that actually, it's these people who have such great influence over real life actions and behaviors that they need to be addressed differently, and perhaps more quickly. Now, a lot of this conversation, of course, is focused on Facebook, because we're talking about the Facebook Oversight Board, but there's nothing stopping Donald Trump from going off platform, going to another platform and launching a website, or… I guess he can't go on Twitter at this point in time either. But tell us what he's doing now, and what happens if he runs again in the future, right? Where do you envision his content being shared?
GE: Well, he did launch his own website called From the Desk of Donald Trump, and it's-
MC: Best website name ever.
LG: I have to say, I've not been to that website.
GE: Imagine Twitter, but the only user is Donald Trump, and there's no other features. That is what this site is. It's a microblog. It's a place where he can tweet his … Well, let's see what's up there now. I'm going to go to donaldjtrump.com/desk.
LG: That's really what the URL is?
MC: I'm sorry.
GE: Not a joke. That is the real URL, and … "Congratulations to the great Patriots of Windham, New Hampshire, for their incredible fight to seek out the truth on the massive election fraud, which took place in New Hampshire." I'm not going to read the rest of it. If you're missing that in your life, that's where you can go to get it. And I find the existence of this, it's obviously humorous. It's also interesting, because it proves simultaneously that Facebook can't literally censor Donald Trump. Facebook and Twitter have kicked him off their platforms, but he's still able to make his statements online.
But on the other hand, it's not going to have the same juice as … Honestly, forget Facebook. We all know that what really gave Trump's statements juice was Twitter. And the biggest reason is us, is journalists. Journalists are hooked up intravenously to Twitter in a way that very few normal people are. We're obsessed with what we see there. The platform has design elements that make it really easy to comment on things, and amplify them, and then laugh at them, or not laugh at them, and Trump just tapped into Twitter so well, and that was just a way to really inject himself into the media discourse. And, of course, he was the president of the United States, and he would tweet, "I'm firing the secretary of state," on Twitter. And so, to some degree, the news media had no choice but to cover it, but it made it a lot easier for him to get our attention by going where we already were, in a way that journalists are not checking Facebook for what Donald Trump has to say there, and certainly are not going to be checking donaldjtrump.com/desk.
The other thing, of course, is that he's not president anymore. As much as everybody loves to talk about Donald Trump, see, for example, this podcast, he's not going to command the same level of media coverage when he's not the president, which gets us to your question, Lauren: what if he runs again? To which I say, first, can we all just take a breath and enjoy a moment of relative quiet? Not making a political statement, I'm just stating a preference for calm. This issue will have been resolved by then. The Facebook decision will have been resolved by then. He will either be permanently banned, which I am skeptical of, but we'll see. I really don't know, because this decision surprised me in the first place. Or he'll be allowed back, and then maybe he'll break the rules again and get kicked off again.
So my point is just, there's going to be more twists and turns in this story before we even get to 2023, when he will … if he decides to be a candidate again, and as a free man, will be a candidate again. And I would not have predicted the Oversight Board would have issued the decision it did. I thought they were just going to say that, "Emergency is over, you got to let this guy back now." And so for that reason, I'm not going to make a prediction here, because I just don't know.
LG: All right, Gilad. We're going to take another quick break. And then when we come back, we're going to talk about the real reason we brought you on this week's show. So stay tuned.
LG: Gilad, what is your recommendation this week?
GE: My recommendation this week is for anybody who likes iced coffee but finds the prospect of making it at home overly elaborate, which is the Aeropress. So I don't usually come on here and shill for specific brands. You know that. I'm Mr. Generic.
LG: Mike right now is just giving the thumbs-up sign and nodding vigorously through the Zoom.
GE: It's a very small thing. It does not take up a lot of room in your kitchen at all. Basically, it's a tube with a plunger and a filter. And it is a way to make something almost like espresso really fast without a lot of equipment. I say almost espresso because it doesn't generate the same pressure as an espresso machine, but what it does do, indisputably, is generate a small volume of concentrated coffee really fast. You boil some water, you pour it in, you stir, and then you plunge. The cool thing is that then you have this small volume of concentrated coffee. All you need to do then is take your glass with your ice and your milk, pour that in, and fill the rest of it with water, and you've got really tasty iced coffee that you made really fast. And the Aeropress process imparts less bitterness to your brew; that's its biggest selling point.
I really don't mind traditionally brewed coffee myself, but the Aeropress does have a cleaner, maybe a little bit more sour, but less bitter taste, which I find really good for iced coffee. So you just make your iced … I sound like such a complete nerd here. I'm not even a coffee aficionado. I just like iced coffee and I don't like having to go out to buy it. So that's my recommendation. Also-
MC: The fact that you're using Aeropress means that you are actually a coffee aficionado.
GE: No. Can I be honest with you?
LG: I mean, that's what we're here for.
GE: The way I got the Aeropress was … there's a YouTube video that I saw featuring the inventor of the Aeropress, this guy named Alan Adler, who's just the definition of avuncular, a nerdy, kindly old uncle type. And it's just him in his kitchen making Aeropress coffee for five minutes and explaining it, and I was just so charmed by it. I was like, "How much is this? 20 bucks? All right, I'm buying one." He also invented the Aerobie flying disc.
MC: Same guy?
GE: Yeah. This guy's got range.
MC: Wow. We should knight him.
GE: We should have him on the show. And by we, I mean you, because I'm still not officially a host.
LG: I like how he says still not, as though he knows it's-
GE: Yeah, I'm not yet a host.
LG: Thank you for that recommendation. A very thoughtful one this week, Gilad. Mike, what's your recommendation?
MC: Now, I'm going to recommend an app, and it is one of the most popular apps in the App Store and in the Google Play Store, but I first encountered it this week and I absolutely love it. It's called Shop. So S-H-O-P, Shop. It's from the people at Shopify. It's an app that, if you're shopping at a website that uses Shopify for checkout, lets you hop in and complete your checkout there. So it acts like a wallet of sorts. But the thing that I really love about it is you can connect it to your Gmail inbox, and it watches tracking numbers as they come in, and then gives you a master list of all of your tracking numbers. So if you order a bunch of things, let's say, you ordered toilet paper for your home, cat food for your cat, a gift for mom, and-
LG: An AeroPress for you.
MC: An AeroPress for me, it will show you all of those things in the app. So it's like an aggregator of all your tracking information, and then you can tap on it and you see a very sort of deluxe notification that shows you where the item is on a map and when the expected delivery is. It shows you all the steps and it combines multiple services. So you have, like, FedEx, UPS, USPS, DHL, they're all in one place. That is the best thing about it. It's free. So even if you don't buy a lot of stuff on sites that use Shopify, you can still get value from it if you buy stuff online, which, I assume, is just about everybody who's listening to this. So that's my recommendation. Shop, check it out.
LG: Cool. Yeah, Shopify has been doing some pretty interesting stuff.
MC: Oh well, I'm sure none of it is as interesting as what you're about to tell us about Lauren.
LG: Well, my recommendation might make you want to shop less. I'm pretty sure I've recommended this podcast before, but I had the chance to listen to a few more episodes this past weekend, and so, I wanted to recommend it again. It's called How to Save a Planet. It is a Gimlet podcast, hosted by marine biologist Ayana Elizabeth Johnson and one of the Gimlet guys, Alex Blumberg. I really like this podcast, and in particular, the episodes that I've listened to most recently. One had to do with whether or not your personal carbon footprint, the whole idea of it, is BS. And so, they argued both sides. It was a listener question that was submitted, and the listener's sibling was arguing that, "No, your own personal consumption doesn't matter that much. We have to have policy changes. There are too many systemic problems. That's how we're going to address climate change."
And the listener was like, "No, I really think people should be eating vegetarian or vegan, and driving EVs, and recycling." So which side is correct? The hosts really, I think, effectively argued both sides of the equation, and I found it to be a fascinating episode. These are from March, but I was catching up. And then I listened to one about beef as well. What's the beef with beef? How much should we be cutting back on red meat? It went into the history of the farm bill and how the American system of farming subsidies has basically informed a lot of the ways we consume food, and basically whether you should stop eating those burgers if you want to help solve climate change. And once again, it's a very nuanced topic. There's no real clear, straight answer, but I think the hosts did a really good job of unpacking these things. And so, I recommend listening to How to Save a Planet.
MC: Nice. I'm going to go give that a shot because I love the beef debate as a lifelong vegetarian and now vegan. I think it is fascinating.
LG: It really is.
GE: Wait, sorry. LG, you said both sides of the debate so many times that my eyes glazed over or my ears glazed over. What did you decide? How did it make you think about your own behavior here?
LG: One of the conclusions that the hosts arrived at, particularly in the personal carbon footprint episode, is that the best solution might be just to use your own personal influence, whatever your sphere of influence might be, whether that's just you and your friends or family, or whether maybe you have a little bit of a larger platform like we tend to have by working in media, to just start the conversation about it more and talk to people about it more. Because if one person gives up meat or one person drives an electric vehicle, you're not making that much of a difference. They do these calculations in the show, and they determine that your individual actions actually count for a tiny amount of impact. They even say, at some point on the show, like, if you were to die tomorrow … Sorry to be dark, but the fact that you are driving a Tesla is not going to make that much of a difference in the world.
Right. But it's when you can start to effect changes in your own community, petitioning to electrify your local town or city, getting five people to give up beef, convincing your friends to take public transit with you, that you start to affect things a little bit more broadly. And then, of course, we do need large-scale policy changes in place in order to actually get our carbon emissions under control. So there was, like, no straight answer, but I guess I personally arrived at, "OK. Maybe I should go back to eating less meat," because there was a period of time where I wasn't eating meat for years and then I started sneaking it in again. I have been thinking more about electric vehicles because I currently don't drive an electric vehicle, and I feel pretty badly about it. And I started thinking about my own individual actions, but I actually started to think more critically about, "OK, but how can I work with groups or people more broadly to think about how we can effect these changes?"
GE: It's similar to how any collective problem has this characteristic to it. Even if you think about voting, your individual vote is just never going to decide the election. You could take a strong version of that and say, "Spend the time you would have spent voting just reminding 20 people to vote and telling them who the right candidate is." But there's another justification for voting, which is more of an ethical question: What is the right thing? What would you want everybody to do? And it's, well, participate in democracy. That's where the climate behavior stuff gets a little challenging. I'm completely sympathetic to the idea of the caller on the show that it's all systemic and it's all about policy change, but then there's another question, which is more one of ethics than of what's actually going to solve the problem. Sorry. I don't know why we had to get all deep here. It sounds like a great podcast, Lauren.
LG: Yeah, and it's OK. We like going deep on this podcast, and I hope that everyone enjoyed this podcast as much as they enjoy the recommended podcast. So-
GE: I think we hope that they enjoyed this one more, but that one also.
LG: Sure. Yeah. Listen to this one first and then kick out to that.
GE: Have an iced coffee, buy some shit online, listen to this podcast, five stars, listen to that one, four stars, move on.
MC: I'm sure at this point, nobody's listening to this show anymore. So-
LG: All right. That's our show. Gilad, thank you again for joining us, our unofficial third co-host. Everyone go read Gilad's cover story.
GE: Bye guys.
LG: And thanks to all of you for listening, especially if you've listened this long, and especially if you've listened to our seven o'clock in the morning voices. If you have feedback, you can find all of us on Twitter. Just send all of your complaints to Gilad. Check the show notes. We'll put our handles in there. This show is produced by the awesome Boone Ashworth who deals with our bad jokes every week. Goodbye for now. We'll be back next week.
[Gadget Lab outro theme music plays]