9th Cir.: Adding an “Additional Layer of Information” to User-Generated Content Is Not Immunized by the CDA
The Ninth Circuit Court of Appeals issued its opinion today in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC.
Roommates.com operates just like a dating service, except that it’s for finding roommates: people enter facts about themselves and preferences about the people they want to be matched with, then the system steers people to the profiles of people who match.
However, the housing world is unlike the dating world in one crucial respect: it’s legal to post a personal ad saying you don’t want to date people who have kids, or people who aren’t Hispanic, or people who are male. But the Fair Housing Act says it’s not legal to post an ad for housing saying you won’t rent to people who have kids, or people who aren’t Hispanic, or people who are male.
(Whether this provision of the Fair Housing Act applies to people seeking roommates, rather than offering rental units in which they don’t live, wasn’t an issue at this stage in the case, and I don’t have the expertise to opine on it.)
The Fair Housing Council sued Roommates.com under the Fair Housing Act, alleging that by asking users for their roommate preferences in categories like gender, familial status, and national origin, Roommates.com violates the Fair Housing Act. Roommates.com claimed that it was immune from this suit under Section 230 of the Communications Decency Act. Section 230 says that online publishers of user-generated content aren’t liable for the content their users post (subject to certain exceptions not applicable here), so long as they aren’t “responsible, in whole or in part, for the creation or development of” the content.
This raises an extremely interesting question: If I have a website that encourages users to post unlawful content, am I “responsible, in whole or in part, for the creation or development of” that content, or am I immune under Section 230? In his opinion, Judge Kozinski uses a colorful example:
Imagine, for example, www.harrassthem.com with the slogan “Don’t Get Mad, Get Even.” A visitor to this website would be encouraged to provide private, sensitive and/or defamatory information about others—all to be posted online for a fee. To post the information, the individual would be invited to answer questions about the target’s name, addresses, phone numbers, social security number, credit cards, bank accounts, mother’s maiden name, sexual orientation, drinking habits and the like. In addition, the website would encourage the poster to provide dirt on the victim, with instructions that the information need not be confirmed, but could be based on rumor, conjecture or fabrication.
(Yes, I registered the domain, and www.harrassthem.com now works, linking back to the Ninth Circuit opinion. Feel free to suggest amusing, non-liability-producing uses for it in the comments.)
Kozinski expresses doubt that such a site would be immune under Section 230. But after posing this rich, tantalizing question, the court leaves it undecided, turning away from whether Roommates.com’s questionnaire itself violates the FHA.
Instead, the court held that it is Roommates.com’s use of the data to control access to users’ profiles and to generate emails listing only “matches,” rather than the site’s publication of an unlawful housing preference, that strips it of Section 230 immunity. By making its site obey a user’s discriminatory housing preferences, the court held, Roommates.com was no longer a “passive pass-through” or mere “facilitator of expression by individuals.” Instead, it was creating new content of its own — an “additional layer of information” it is responsible, at least in part, for developing.
I find this holding somewhat troubling. The court seems to have been concerned with Roommates.com’s implementation of its users’ discriminatory preferences. By the court’s reasoning, merely repeating something a user posted, even in edited form, is permissible, but taking actions or generating new information (such as “match” emails) based on the user’s post can give rise to liability. This raises difficult line-drawing questions, since under it some automatic processes (like a script that deletes all the dirty words in a post) let a site retain its immunity, while others cost the site that immunity.
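To see how fine that line is, here is a minimal, entirely hypothetical Python sketch (the function names, word list, and data are my own invention, not anything from the case). Both functions are automatic processes run over user-submitted content, yet under the court’s reasoning the first would preserve immunity while the second could forfeit it:

```python
# Hypothetical illustration of the court's distinction between two
# automatic processes applied to user-generated content.

DIRTY_WORDS = {"darn", "heck"}  # stand-in word list


def scrub_post(post: str) -> str:
    """Edit a user's post by deleting dirty words.

    Under the court's reasoning, this content-neutral editing merely
    republishes the user's own content, so immunity is retained.
    """
    return " ".join(w for w in post.split() if w.lower() not in DIRTY_WORDS)


def match_email(profile: dict, candidates: list) -> str:
    """Generate a 'match' email listing only candidates who satisfy
    the user's stated preferences.

    Under the court's reasoning, this produces an 'additional layer of
    information' -- new content the site itself develops -- and so can
    strip the site of Section 230 immunity.
    """
    matches = [c["name"] for c in candidates
               if c["gender"] == profile["preferred_gender"]]
    return "Your matches: " + ", ".join(matches)


print(scrub_post("this darn apartment"))  # -> "this apartment"
print(match_email({"preferred_gender": "female"},
                  [{"name": "Ann", "gender": "female"},
                   {"name": "Bob", "gender": "male"}]))  # -> "Your matches: Ann"
```

The two functions are about equally trivial to write, which is exactly why the distinction between them makes for an uneasy legal rule.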
Thankfully, I think discrimination laws may be among a very small set of laws to which such a rule could apply. Taking action or generating new content based on user-generated content doesn’t give rise to liability in many areas outside intellectual property, and intellectual property laws are already categorically excluded from CDA immunity.
Still, legal rules under which the automatic parsing of data (User: “I want a female roommate”) into metadata (Server: “Hmm, perhaps I should match this user only with females”) is a legally actionable event make me uncomfortable.
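That parsing step can be sketched in a few lines of (again, entirely hypothetical) Python — a trivial script that turns a user’s stated preference into server-side metadata, which the matching machinery then acts on:

```python
# Hypothetical sketch: deriving actionable metadata from a user's
# free-text preference. The keyword list and field name are invented
# for illustration.

GENDERS = ("female", "male")


def parse_preference(text: str) -> dict:
    """Turn 'I want a female roommate' into {'match_gender': 'female'}.

    The user wrote prose; the server derived metadata. Under the
    court's reasoning, it is acting on that metadata -- not publishing
    the prose -- that can create liability.
    """
    words = text.lower().split()
    for g in GENDERS:
        if g in words:
            return {"match_gender": g}
    return {}


print(parse_preference("I want a female roommate"))  # -> {'match_gender': 'female'}
```

The discomfort is that this transformation is so mechanical: a few lines of string handling are all that separates a “passive pass-through” from a developer of content.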