Wednesday Links -- 18 November 2015

Here are some of the things I found interesting this week. If I had to pick a theme, it'd be Historical Connections. The Atlantic talks about our changing perception of suburban development. Compare Bloomberg's article about China's second-generation douchey rich kids with the Awl's history of Esquire magazine and the making of the American bachelor. Seattle Met has a really good article about the history of Seattle's craft beer scene in light of Elysian Brewing's recent acquisition by a megacorp.

There's a new attempt to clarify academic writing. Well, it's been tried before:
Lester S. King, MD, has produced the best book on scientific writing I have read (and I think I have read most such books as well as many essays intended to guide would-be writers). The 11 chapters flow easily and readably from first ("The Present Scene") to last ("Setting Up a Course in Medical Writing"). Each chapter can be read profitably as a unit, yet remains part of the whole, valuable alike to teachers, editors, writers, and would-be writers.
This review comes from 1978. So while Why Not Say It Clearly? is a damn fine book (I own it and have read it), I don't think it had a strong, lasting corrective effect on jargonization in academia.

The LessWrong / Effective Altruism communities have a general affinity for "nootropic" (cognition-enhancing) drugs like modafinil. And while I'm a proponent of human enhancement in principle, let's just say I won't be an early adopter. This 2009 New Yorker piece gives some insight into the incentives created by nootropic and amphetamine use. Needless to say, this needs some serious philosophy. Not everyone who goes hard on Adderall or Provigil is going to become the next Paul Erdős:
His colleague Alfréd Rényi said, "a mathematician is a machine for turning coffee into theorems", and Erdős drank copious quantities. (This quotation is often attributed incorrectly to Erdős, but Erdős himself ascribed it to Rényi.) After 1971 he also took amphetamines, despite the concern of his friends, one of whom (Ron Graham) bet him $500 that he could not stop taking the drug for a month. Erdős won the bet, but complained that during his abstinence, mathematics had been set back by a month: "Before, when I looked at a piece of blank paper my mind was filled with ideas. Now all I see is a blank piece of paper." After he won the bet, he promptly resumed his amphetamine use.
I do, however, love me some beer, which is why I had some very mixed feelings when I heard that Elysian Brewing was getting bought out by AB InBev, that is, by the makers of Bud Light (and the abomination that is Bud Light with Clamato). This Seattle Met article about Dick Cantwell, one of the founders of Elysian, is a powerful read, almost a Greek tragedy.

Two articles on non-trends in technology. Aeon Magazine asks, "Why have digital books stopped evolving?" while Idle Words explains how the Wright Brothers stymied the development of airplanes for decades after Kitty Hawk, exposing the fundamental failure of the patent system:
The whole point of patents is supposed to be to encourage innovation, reward entrepreneurship, and make sure useful inventions get widely disseminated. But in this case (and in countless others, in other fields), the practical effect of patents turned out to be to hinder innovation - a patent war erupts, and ends up hamstringing truly innovative technologies, all without doing much for the inventors, who weren't motivated by money in the first place.
After all, if the patent system failed small-time tinkerers literally inventing stuff in their garage... how could we even say it works?

Hemant Mehta, the Friendly Atheist, is also a mathematics teacher. He also has the thankless task of rebutting ignorant parents whose unreflective anger at Common Core math standards goes viral. Why would a math teacher punish a child for saying 5 x 3 = 15?
If you’re looking at the exam and thinking “The kid got the right answers and that’s all that matters,” well… that’s why you’re not a teacher. I was in front of a classroom for several years and I know it’s entirely possible for a student to get the right answer without solving the problem correctly. Sometimes, that’s because of dumb luck. Sometimes, like in this case, the student just didn’t understand what the teacher was looking for.

I’m not saying I would have taken a full point off each of those questions, but the teacher wasn’t wrong to correct the student. The points didn’t come off because the answer was wrong. The points came off because the process the student was using won’t be helpful in the future.

Another Aeon piece, this time at the intersection of philosophy and cognitive science: Free Will Is Back, and Maybe We Can Measure It

Don't Look Now, But This Guy Just Put Buzzfeed and Upworthy Out of a Job: Generating clickbait with recurrent neural networks. With a bonus website called "Click-o-tron" where all content is generated by said RNNs.

The revolution in vat meat continues apace: Scientists Say Lab-Grown Meat Will Be Available to the Public In Five Years

This treacherous 220V flash drive can fry your computer in seconds: "The USB killer v2.0 features a DC-to-DC converter that charges a set of capacitors hidden inside once it’s been plugged into a USB port. That energy is then redirected back into the device as a 220-volt electric surge, again and again, until the hardware completely fails."

Timothy B. Lee (no relation) at Vox is skeptical about a new "Bitcoin computer." The principle is interesting, but of course the big problem is that it's Bitcoins.

Bloomberg Business wins at headline puns. Children of the Yuan Percent: Everyone Hates China's Rich Kids. Super-wealth induces absurdity regardless of culture, it seems:
Emerging from a nightclub near Workers’ Stadium in Beijing at 1:30 a.m. on a Saturday in June, Mikael Hveem ordered an Uber. He selected the cheapest car option and was surprised when the vehicle that rolled up was a dark blue Maserati. The driver, a young, baby-faced Chinese man, introduced himself as Jason. Hveem asked him why he was driving an Uber—he obviously didn’t need the cash. Jason said he did it to meet people, especially girls. Driving around late at night in Beijing’s nightclub district, he figured he’d find the kind of woman who would be charmed by a clean-cut 22-year-old in a sports car.
There's actually a lot more to unpack there, from the traditional Chinese focus on family name over individuals, to the psychological legacy of the Cultural Revolution. That said, one has to wonder: why do Silicon Valley founders and venture capitalists act so similarly? America didn't have anything like the Cultural Revolution...

Related: The Awl has a fascinating little history of Esquire and the changing definition of the unmarried American man. There are some incredible excerpts from Esquire's Handbook for Hosts, including why men are inherently better than women at cooking, especially fish, because women don't understand fish(???). Any connections to the modern "Red Pill" and "Men Going Their Own Way" movements are left as an exercise for the reader.

Dylan Matthews at Vox makes a case against "equality of opportunity." No surprise, but he suggests a basic income instead. Can't disagree there!

A writer for City Observatory in the Atlantic: How Tasteless Suburbs Became Beloved Urban Neighborhoods. I was already willing to accept this premise, not just because I'm a filthy urbanist sympathizer but also because I saw some photos of new suburban developments in Seattle from that era (and the Boomer era) and yeah, they're ugly. Mostly by dint of so much land clearance that it's just houses in a sea of dirt. Also, the "Whites Only" housing covenants. The Atlantic piece rightly notes, however, that there's not really an "objective" standard for urban development. In addition, I think this provokes some interesting questions about the role of historical laws in shaping future preferences.

REVIEW: "The Blade Itself" -- The book itself incites men to reading

The Blade Itself is the first book in Joe Abercrombie's First Law trilogy, which is blessedly finished, unlike some other notable fantasy series. Abercrombie came enthusiastically recommended by one of my fellow undergrad math tutors at Western Washington University; it's been something like four years between that recommendation and finally picking the damn thing up.

So what kind of book is The Blade Itself? One of my favorite kinds: it's gritty, low fantasy with extra pulp! Abercrombie's style reminded me somewhat of George R. R. Martin, with a bit more of an ironic voice to the narration (it helps that most of the POV characters take an ironic view of life), and with far less in-your-face ill fortune. In other words, POV characters don't die at inopportune moments—i.e. just when you're getting invested—and yet by the end of the book there are more than enough looming dangers to keep you waiting for the next installment.

One amusing bit: I listened to the Audible audiobook, which is really well narrated by Steven Pacey... except that he's now ruined me on the pronunciation of the names! With his natural English accent, Pacey's pronunciation of "Inquisitor Glokta" sounded to me more like "Inquisitor Glocter," and "Ardee West" more like "Adi West" (or maybe "Addie")... and I prefer those spellings. So dang.

The story follows several POV protagonists around and across a vaguely-Western-European-feeling landscape, from "The North" of savage barbarian clans and swarming totally-not-orcs called Shanka, to the sort-of-Holy-Roman-Empire called the Union, highly political and lapsing into decadence. There's brief mention of other lands, too: the Ottoman-Persian-sounding Gurkhul, the Free Cities of Styria, and so on. I don't think the actual map of the world resembles Western Europe (or even Robert E. Howard's prehistoric "Hyborian" Europe), but I slotted the various civilizations into known geography pretty easily.

Our first protagonist is Logen Ninefingers, an infamous barbarian mercenary, former champion of the new King of the North, and leader of a band of badass mofos who were still no match for him. Just why that is gets saved for the very end of the novel—Logen ends up traveling south after being presumed dead, muttering "I'm still alive" all the while, and trying to shake off his bloody reputation.

Next in the POV narration is Inquisitor Sand dan Glokta, formerly a strapping colonel in the Union army and a champion swordsman, who was captured by the Gurkhish and mutilated in their torture pits. His narrative refrain is an introspective Why do I do this? Why would anyone do this? even as he demonstrates a real talent for inquisiting and torturing. He's helped by his two masked "practicals," the tongueless albino giant Frost and the basically-a-Cockney-ruffian Severard, as they stumble upon a conspiracy in the Union.

There's Jezal dan Luthar, a foppish captain in the King's Own officer corps, high-born and a complete dick to everyone. He's also rather thick-headed. Then he meets his friend Major West's seductive and inebriated sister Ardee (not Addie!) and falls in love. Oops! But maybe falling in love with a peasant girl from the sticks will teach him about class inequality...?

The POV also touches on a few other characters, like "the Dog-man," one of Logen's band as they go on without him; and Ferro, an escaped slave of the Gurkhish emperor and one of the last of her people, who is ridiculously hell-bent on revenge and amusingly racist towards the "pinks" of the Union.

Tying them all together are the machinations of Bayaz, First of the Magi, sort of like Gandalf except not half as morally upright. There's some tantalizing backstory about who exactly the Magi are and what Bayaz is trying to do with the Union, and I hope the rest of the trilogy explores what I saw as an opening to the magical version of the Prime Directive.

What does Abercrombie do with this story that I particularly like? First of all, he plays with (often subverting) some typical tropes of "high" fantasy. Yes, his world seems brutal, but there's an air of humor to it, even if it's sometimes bleak humor. In that respect I sort of imagined everyone as being Warhammer Fantasy characters. Compare to George R. R. Martin's way of playing with tropes, which is to say he brutalizes them and takes them to their horrible logical endpoints. 

Abercrombie manages a subtler feat—he undermines the "good triumphs over evil" inevitability of high fantasy, gives his world some pretty high stakes from a multitude of threats, and yet you don't expect him to start offing POV characters on a whim. Moreover his POV characters are not really "good" people; they're variously stupid, petty, self-loathing, melancholy, prejudiced, and so on, but you do find yourself caring about them. Yeah, even Jezal, hard as it was to come to that. I'm in agreement with the other fans that Glokta is the best character so far though.

I also really appreciated the nice touch of subtle narrative differences between POV characters. For example, Logen's narrative uses short, choppy sentences or fragments, and some frequent sayings, like "Say one thing for X; say (he/she/it) Y" and "And no mistake." Glokta's narrative has by far the most inner musings, as he is by far the most introspective character. Jezal's narrative is obviously colored by his class prejudice. Pacey, the Audible narrator, does a fine job of reading all this seamlessly, so that it's probably possible to tell whose POV it is just by his inflection. He also gives Glokta a weird lisping voice that isn't indicated by the text: the Gurkhish removed alternating teeth so he can't chew food, or pronounce things properly.

Finally, I always appreciate a story that includes magic as a mysterious, otherworldly, and terrifying force. Abercrombie's story is no exception. Plus, magic comes from the "Other Side," where demons live... very Warhammer-y.

If I could find one criticism, I'd probably have to go with the common one among reviewers. The Blade Itself, being the first book of a trilogy, feels rather slow and meandering. The character arcs mostly converge on three plot threads: a quest to the ends of the earth led by Bayaz, the Gurkhish undermining the Union's hold on an eastern city-state, and the Union's brewing war with the Northmen. It's essentially all setup for the next book... but that said, it's a pretty cracking story for all that.

Highly recommended for fans of pulpy low fantasy, as well as "new fantasy" authors like Patrick Rothfuss (The Name of the Wind) or older "grim fantasy" writers like George R. R. Martin, Glen Cook (Chronicles of the Black Company), or (for a more epic feel) Michael Moorcock.

The categorical imperative for social media

At the beginning of summer I decided to detach myself from Facebook—no longer would I 'like' or comment on public posts, though I did keep up a bit with certain closed groups for stuff I was doing over the summer. And yet, since most of my friends are still posting on Facebook, I still look at it. This detached observation lends itself to philosophizing, as I looked at the nature of posts in my news feed and wondered: Is there anyone working on an ethics of digital public conduct?


Certainly the ethics of physical public conduct go all the way back to the foundations of philosophy, all over the world: what constitutes the good life, what duties do we owe to others, and so on. And seeing what gets posted, from a meta level, sparked this Facebook IM discussion with a friend (my comments in bold):
It seems like the mocking impulse comes from a similar place to the cult mocking impulse

Like, these people think they've found a deep answer and the conventional wisdom is wrong, they look somewhat silly doing it, it's all they'll talk about for a while

Any gains are likely obtainable through conventional means (like scientology's personal organizational techniques)

My reaction to seeing stuff like "the keto diet is great!! omg" is roughly the same as "im truly #blessed!!"

Mild exasperation? :P

Like, it's cool you're doing well, but it's still a little cringey if you apparently don't have a very good understanding of the actual causal process, and apparently think you're doing well just from your membership in a group

And conversely, it's similarly cringey if you post a photo of gluten-free facial cream with the caption "lol rubes" and apparently think you're doing well just from your non-membership in a group you're actually confused about

Yeah exactly

The impulse towards pure lifestyle signalling is already strange to me

Ha, it's like a social media categorical imperative: post things that treat your friends as ends in themselves, rather than means to an end (i.e., likes, favs, RTs)
Before I flesh out that last notion, allow me to review the literature.


The Stanford Encyclopedia of Philosophy has a page on the ethics of social networking, because it's the SEP and has a page for everything. But most of the philosophical work so far seems to be focused on social networking and alienation, or how social networking will affect democratic processes, or (closer to my interest here) issues of authenticity:
The messy collision of my family, friends and coworkers on Facebook can be managed with various tools offered by the site, allowing me to direct posts only to specific sub-networks that I define. But the far simpler and less time-consuming strategy is to come to terms with the collision—allowing each network member to get a glimpse of who I am to others, while at the same time asking myself whether these expanded presentations project a person that is more multidimensional and interesting, or one that is manifestly insincere.
As for my specific concern about personal conduct in digital space, and by extension what demands digital interaction places on physical behavior, there seem to be far more questions than answers:
Edward Spence (2011) further suggests that to adequately address the significance of SNS [social networking services] and related information and communication technologies for the good life, we must also expand the scope of philosophical inquiry beyond its present concern with narrowly interpersonal ethics to the more universal ethical question of prudential wisdom. Do SNS and related technologies help us to cultivate the broader intellectual virtue of knowing what it is to live well, and how to best pursue it? Or do they tend to impede its development?
There's a growing notion that something is off with digital society, with a seemingly inexhaustible focus on vapid clickbait and shares-over-content. Alain de Botton, a Swiss philosopher who has some pretty dumb ideas on atheism (*cough* giant black temple *cough*) and religion (*cough* they got education right *cough*), nevertheless seems pretty on point in this Washington Post interview where he says:
We need relief from the Twitter-fueled impression that we are living in an age of unparalleled importance, with our wars, our debts, our riots, our missing children, our after-premiere parties, our IPOs and our rogue missiles. We need, on occasion, to be able to go to a quieter place, where that particular conference and this particular epidemic, that new phone and this shocking wildfire, will lose a little of their power to affect us – and where even the most intractable problems will seem to dissolve against a backdrop of the stars above us.
Chez Pazienza has been trying to stop Jimmy Fallon's implacable Buzzfeed-ification of network TV for several years now:
The key to Fallon’s overwhelming success — that working formula — is viral transmission. The Tonight Show has basically become a living, breathing Upworthy, with Fallon carefully and manipulatively crafting at least one moment each night guaranteed to be shared all over social media the next day.


In other words, if you were born at any point between, say, 1950 and 1980, there was something on The Tonight Show this week you could point to and say, presumably with a smile, “Hey, I’m aware of that thing’s existence.”
All of Fallon's viral bits trade heavily on nostalgia but aren't particularly enlightening or transformative. And he's not alone; just recently SkepChick ran an article about the "I Fucking Love Science" Facebook page, and how it has veered away from fact-checking in favor of clickbait:
But [Elise] Andrew [the owner of IFLS] fails to grasp what both credentialed scientists and science enthusiasts alike know:  Fervor doesn’t necessarily make good science communication. Conveying scientific findings accurately does. While passion is great, it’s just icing on the cake. Let the recent criticism help IFLS reclaim the real science that once fueled its content and commitment. Andrew has done great things with IFLS, but she could be teaching her vast audience about the power of the scientific method, of accuracy, and of science’s most raw purpose:  to perpetually seek the objective truth.
(Note that while Rebecca Watson jumps in—"Upon being made aware of this article, I reached out to Elise Andrew for a response. (In full disclosure, I consider Elise to be a friend). Here is her point-by-point rebuttal to several facets of this piece"—Andrew's "rebuttal" doesn't actually address the substantive claim of fact-free reporting. Oops.)

Then there's this Gawker piece about the Christian vlog community:
The Shaytards are a Mormon family of seven from Idaho who post daily vlogs about their suburban activities with vague, exclamatory titles like “CHEERLEADING MOM!”, “BRIBING CHILDREN”, and “YOU BETTER STOP THAT!” Shay estimates their channel brings in $771,000 per year.

In an interview with Variety last month, Shay described the family’s “content strategy” thusly:
I believe intrinsically family is our greatest source of happiness. My wife is prettier than most moms, and I’m probably funnier than most dads—that helps—but ultimately, it’s the family. What viewers really want to see is my wife and kids together. We get happiness from families, because people need that hope.
It’s a strategy that Sam and Nia and hundreds of other Christian vloggers are desperately trying to mimic, right down to video title construction. (Some recent Sam and Nia hits: “OUR HOUSE JUST GOT FUN!”, “SPICING UP OUR MARRIAGE!”, and “SISSY GOT HURT!”)
The irony that Gawker, a house built whole-cloth out of clickbait, would run this story is not lost on me.

This isn't a new trend, obviously. Wired ran an article in 2013 about the rise of 'racy' headlines among online media aggregators:
In its revived form, the headline is finding relevance far beyond news media as it becomes a key weapon in fields like politics and business. No longer the exclusive province of copy editors, it is now the cornerstone of emailed political appeals, the fulcrum of crowdsourcing capital on Kickstarter, and arguably the basis of an entire communications medium, the all-headlines microblogging system Twitter.
There's even a similar phenomenon emerging in conservative politics, as the 2016 Presidential primary campaign for the Republican Party seems stuffed to the gills with self-promoting self-branding grifters rather than serious people who want to, you know, govern. As Gin & Tacos so eloquently puts it:
The system has become so ridiculous and the idea of personal branding through social media has become so pervasive and potentially lucrative that it is now impossible to determine who is running for president and who is running to build an empire of dolts easily parted from their cash. It is relatively easy to spot a career narcissist like Trump and wonder aloud if he is dumb enough to think he can win or if this is all an elaborate publicity stunt. That skepticism needs to be applied more broadly, though. 90% of the people running are suspect on that criteria.
And a cursory glance at online news suggests that for some people, it's easier to throw up a hashtag campaign and garner national attention (that is, marshal Internet outrage on your behalf) than to deal with the situation directly.


So that's the state of the field. Sharing-for-sharing's-sake is a very seductive strategy for organizations, brands (er, #brands), and individuals (er, #brands). Philosophy, or at least philosophy-as-seen-in-the-Stanford-Encyclopedia, hasn't caught up yet. To which I humbly advance my own categorical imperative for social media:
Post content that treats the recipients as ends, and not merely as means to an end.
I call this the "categorical imperative" because I'm borrowing the language of Kant's Categorical Imperative (second formulation), which goes like this:
Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.
Now, I'm no Kant expert, not even a Kant amateur, and pretty much not generally Kantian—I think that I would probably tell the Nazi that there are no refugees in my house, because "fuck Nazis" is a more universalizable maxim than "never lie," thank you very much—but this formulation always seemed like a decent enough maxim. It seems better, in general, to treat other humans as fully-actualized individuals and moral agents... and to expand that notion by degrees, to include other beings with sentience and rich emotional lives, but that's another post.

With regards to social media, I can make my categorical imperative a bit more blunt: please stop thinking about your online behavior in terms of "brand engagement." Avoid it like the fucking plague. Someone might ask: why? And let me count the ways...

Wikipedia tells us that brand engagement is "the process of forming an emotional or rational attachment between a person and a brand." Note the distinction. A commercial brand like Target or Old Country Buffet is not self-actualizing; it has no personality or executive function beyond one imagined by its followers—which they piece together from content, ah, imagineered by employees of the company. It's strictly metaphor: these entities are agents only in the legal sense, the legal fictive sense, as a concession to the courtroom and the tax code. So when people say things like "Coca-Cola has a friendly personality," it's a delusion—perhaps a conscious one, or a happy one, but it's a delusion.

A personal #brand is not equivalent. Individuals are just that: individual. Corporate brands fail to reduce to a single physical presence; people, in some sense, do. And while personality cannot but be socially constructed—at least in part—it would be silly to refuse to point to an individual to whom a given personality belongs. Personality is inherent to individuals: that's why corporate branding has to create the illusion of an individual agent called "McDonald's" or whatever.

So what's the point of personal branding? Well, I can think of a few reasons, in descending (partial) order of seeming legitimacy:
  • You actually "are" a business—freelance, contractor, etc. So you actually do need to curate a sort of "business" personality for potential clients or employers or customers. This is a funny one because it's both personal and commercial branding. The classic example is an author using a pen name, or an actor using a stage name.
  • Offline life is sufficiently hostile to certain aspects of your personality that you can basically only express those aspects online. Hostility can mean anything from "I'm an angsty teenager so everything seems hostile and I need to try out new identities and find myself" to "I will literally be abused or injured or killed for expressing this offline." This isn't exactly branding as much as refuge in pseudonymity but it sort of counts, and is important. I don't mean to come across as "Your online and offline lives must match exactly," that's stupid.
  • You have a rather generic name and want your content to be found on search engines. Hey, that's me! I don't actually do commercial business online but "Stephen Peterson" is common enough that I needed to call my online identity something different. And thus StephenMeansMe (because it does!) was born: my online "brand," which I manage in the sense that sometimes I don't post stuff online.
  • Micromanagement of social interaction, or "playing to type," or "being gimmicky." That's in order of how transparently obvious the strategy seems. This used to be reasonably common in forums where there wasn't the fiction of names and profiles matching "real" people. The classic pathological example is the "teh penguin of d00m" copypasta. Or maybe boxxy. Or probably the entirety of "Vine celebrities." *gag*
  • Narcissism. I mean actual psychopathology here, not just in the colloquial sense of merely being a self-absorbed idiot.
I don't claim this list is exhaustive, nor that each step in the ladder represents an equal negative ethical gradient. The last couple of steps, in my opinion, are far worse than the first couple. And for what it's worth, I would probably rate my own behavior as slightly ethically negative: however much I sort of have to do some personal brand management because of how search engines work, I still think it's faintly stupid.

Can a person do some personal brand management and not violate my categorical imperative for social media? Post content that treats your contacts as ends, and not merely as means to an end. Note the "merely": it'd be really hard to treat other people strictly as ends, unless one were uniquely virtuous and also completely lacking all human needs.

An example of tech-lifestyle-thinkpieces getting it wrong: Stop saying technology is causing social isolation. It's a Medium post, so a self-indulgent tone is just part of the package deal (just be glad it's not Thought Catalog), but I think it makes some important, if dubious, points:
Part of the commentaries I’ve seen criticizing this whole issue also touch on social media, since it is so integrated with our usual smartphone usage. For example, I see lots of people complaining about people who take pictures of the food they order at restaurants for posting them on Instagram or wherever. I don’t get that. Is it wrong to create a permanent memento of an otherwise temporary experience, to capture in a photo the work of the people back in the kitchen who made an effort to make the food look attractive? To me, those complaints allude to a lack of understanding of how modern social media works.
Yet this rebuttal itself seems to lack an understanding of how modern social media works. It's simply naive to think that "permanent memento" is anywhere near the top of the feature priorities of social networking services. They're not for storage; they're for sharing. For engagement. And that's sort of opposed to permanence: corporate brands cannot stay static if they want to remain "relevant" and continue to occupy consumer mind-share.

The article quotes a Tumblr user:
I’m happy seeing my friends take photos of their food. I like taking photos of my food. Because there is a chef in the back of the kitchen who works hard to plate things beautifully and in any other situation, people dive in immediately and ruin that image. We take photos to preserve that image and who the fuck knows, if I was the chef I would be digging through instagram hoping to see my plate on there. We’re celebrating someones hard work, work that is generally temporary.
Because not only does one need to compliment the chef, one needs to demonstrate to their followers that they've complimented the chef. Merely writing a note of praise on the receipt, or saying something out loud, or tipping well—these will not do, because these are, if not private, then unshareable. By the way, I invite probability estimates for the case that the Tumblr user was unaware how desperate and FOMO-anxious "if I was the chef I would be digging through Instagram hoping to see my plate on there" sounds.

Of course, because Tumblr, the user concludes:
Technology isn’t bad. You’re just upset with yourselves for having a lack of self-control. You hate that people connect through technology. And maybe, you just don’t like seeing people love themselves, enjoy life, and feel joy. That’s your problem, not technology’s.
Right. To draw a far-too-extreme and hackneyed analogy, it's not that people "just don't like seeing people love themselves, enjoy life, and feel joy" high out of their minds on party drugs. One should still question the ethics, meaning, ideology, in short the philosophy, of that sort of behavior.

The Medium author concludes in spectacularly self-important fashion:
So, stop feeling superior for making fun of other people because they’re using their smartphones, stop pretending our lives and society would be better without them, stop blaming technology for natural human behaviors. If you see an image like the ones I referenced here, the ones trying to show how we are “letting technology ruin social interactions and pleasant experiences”, stop and reflect on why people are actually using their electronic devices. Furthermore, stop romanticizing the past, believing life was better without all of this ubiquitous technology like in some form of “neo-ludism”. Consumer technology is good. It enables us to connect in amazing ways as humans. It is not replacing real interaction. It is augmenting it. Embrace it.

Should you be surprised that the author is "a Multimedia Engineer focused on human-computer interaction at the intersection of art, tech, science and culture"?

Moreover, his is a weak-man sort of argument: yes, there are people (mostly older) who scoff and harrumph about Those Darn Millennials, or more generally Kids These Days, and how "technology" is making us lonely or whatever. I've written against that sentiment before and I stand by it.

But I'm not saying all technology is bad. I am saying that, rather than stopping at "but my social network is so big now!", we should be evaluating the quality and nature of those interactions. A 140-character limit (Twitter) or a user interface dedicated to rapid resharing (Tumblr) can't have zero effect on discourse.

    I quit Facebook but I didn't stop doing what I used to do there: I still post links, just in blog form. I still talk to my friends on Messenger. But I thought about the mindset that Facebook engenders and I just don't like it.

    In other words, empirical facts cannot stop a conversation about values.

    Wednesday Links -- 4 November 2015

    It's a special "schools and sex edition" of Wednesday Links! Schools are hard on introverts, except that students are generally becoming fragile bundles of nerves. The Ivy League schools suck because everyone there is a tool, while kindergarten in Finland doesn't suck because the kids play instead of read. On the sexual side of things, it's all about sex negativity and why that won't help prevent abortions or prostitution. Then again, it might help you stay a kiss-virgin until your wedding day.

    Michael Godsey at The Atlantic writes about when schools overlook introverts. I'm not sure how I feel about one strategy over another, inasmuch as I'm fairly middling between introversion and extroversion. That said, this seems like a thing that "free market" schools would respond to very slowly, simply because (as you can infer from the university anecdotes Godsey provides) extroverted students "do" more, and are therefore more receptive to extracurricular amenities. Introverts, presumably, wouldn't offer many more "hooks" for college marketing beyond "classes are good, library has quiet spaces." At least to the mind of college marketing departments, because marketers are not known for their introversion.

    According to Psychology Today, universities are facing a big problem: Declining student resilience. The increased demand for "trigger warnings" (but caveat: that's mostly a private-school phenomenon) is just one facet of the changing collective student psychology. Sometimes it's just an amplification of a perennial student attitude:
    Faculty also noted an increased tendency for students to blame them (the faculty) for low grades—they weren’t explicit enough in telling the students just what the test would cover or just what would distinguish a good paper from a bad one. They described an increased tendency to see a poor grade as reason to complain rather than as reason to study more, or more effectively.
    See also this post (and many others) at Gin & Tacos. In particular, this G&T post ("Can this 900-lb gorilla pay tuition?") sort of unites the Psychology Today and Atlantic pieces:
    The second part is the one people only whisper about. More and more students are going to college over the past two decades, partly driven by the availability of loans and the inability to enter most fields without a degree. The end result is that moreso than any time in the past, today there are huge numbers of students flocking to college who have zero ability to succeed there. Universities of course want to retain these students, and in order to do so they have to create a massive bureaucracy of support services. Any skill tangentially related to completing college level work now has a lavishly staffed support center devoted to it on campus. A writing center, a study skills program, tutoring services, a math helpdesk, a massive bureaucracy devoted to the shockingly large share of students diagnosed with various disabilities, and anything else you can imagine.

    If you want to stay open, you have to admit a certain number of students. In an ideal world you accept only students who can succeed given the nature of the school. In reality you end up taking a lot who probably can't. And if you accept students who do not know how to write sentences in English, you better have someone ready to hold their hand if you expect them to last longer than a semester. That costs money – a lot of money.

    When you add up the cost of huge salaries for presidents, provosts, deans, and deanlets, recreational facilities that resemble theme parks, athletic programs (a competitive D-I football program costs a small fortune), shiny new buildings, and an army of functionaries tasked with guiding students who sometimes lack even high school level academic skills through college coursework, it makes sense why costs are exploding. Those of you who went to college in the ancient past can attest to how austere the accommodations were, how barebones the support services were, and how little "fun" universities paid to provide.
    With a bleakness that is vintage Gin & Tacos, the hypothesis becomes clear: cash-hungry universities spend lavishly on "fun" because they have to keep the students entertained, or else the students will leave—from exhaustion or the reality of their own unreadiness, to say nothing of graduating early—and there goes the money train.

    Take it from this VICE correspondent: Going to an Ivy League school sucks. Not to knock my awesome lead instructor at Johns Hopkins Center for Talented Youth or the CTY summer program in general, but I kinda got a sense of that there. There's just so much more snobbishness and stupid social traps and nakedly obvious social climbing on the East Coast: middle schoolers with any glimmer of promise have to think about which high school to apply to, on their way to (because it's just the default) which Ivy they'll attend. Personally I think social segregation should be kept to an absolute minimum before kids turn 18, and I'm a fan of broad-base liberal education, so I'm already not a fan of "magnet" schools. But jeez:
    On average, the Ivies accept about 8 percent of applicants. I still remember the shock I felt when I got into Columbia. In my freshman orientation program, we had a discussion about what it meant to attend an Ivy. The greatest artists, politicians, scientists, and entrepreneurs had walked through these halls. "We're the elite," one girl said. "We're not like other people." She pointed to the passersby outside, visible through the barred windows. "We're going to change the world."
    I don't even come from an underprivileged background like the author does, and I would probably have had a mental breakdown by week 3 just from holding back a series of "GO FUCK YOURSELVES" at people like this. Sure, these schools have the best academic programs, but if the cost is tens of thousands of dollars and being embedded with a bunch of trust-fund, no-sense-of-noblesse-oblige pricks?

    State Supreme Court: Charter schools are unconstitutional. Apparently the Washington State Constitution, in addition to guaranteeing public funding of education, forbids public money from going to schools that aren't under the auspices of elected school boards. On the one hand, this probably protects us from bait-and-switch school voucher schemes that funnel taxpayer money to nonsense-based religious schools (the Loch Ness monster proves that dinosaurs didn't go extinct and evolution is a lie). On the other hand, the requirement does sort of pre-empt the best argument for charter schools: that the school can experiment with different practices and strategies without needing approval or homogenization from a school board. But charter schools are such a mixed bag that some refinement is needed; perhaps the State SC decision will jump-start that.

    The Joyful, Illiterate Kindergarteners of Finland. Putting aside the intentionally provocative headline—Web title: "Why Kindergarten in Finland Is All About Playtime (and Why That Could Be More Stimulating Than the Common Core)," because clickbait—and the usual fawning, mystical exuberance for the Finnish educational system, it's a pretty interesting piece. Okay, but fill in your own caveats. Mine are a bit general: usually American heavy-sigh reporting on Finnish education takes a tone of incuriosity, like "isn't it amazing that the Finns can do that and we can't, for some reason?" The reasons seem pretty straightforward: (1) treat teaching like a true profession, and (2) give teachers more control over classes while simultaneously demanding more rigor and selectivity in the credentialing process. (Oh, and (3) if you give students standardized tests, damn well make the tests matter.) One might expect that double-Masters-holding, years-of-practicum-experienced teachers would understand how to give young students an education without being rigidly didactic—as I'm sure district administrators demand but Common Core probably doesn't require as stringently, i.e. the typical case for badmouthing a set of guidelines.

    If "pro-lifers" wanted to end abortion — rather than control sex — their tactics would be radically different. Dr. Valerie Tarico was one of the "con" panelists at the God Debate that I co-hosted at WWU last year, and is a persuasive speaker as well as an accomplished research psychologist. This isn't the first "anti-abortion activists are doing it all wrong" piece to float around the Internet, but Dr. Tarico makes a pretty humanistic case for her policy suggestions that others may not. That said, it's not clear to what extent this will change anyone's mind: if anti-abortionists really are coming from a place of sex-negativity, no amount of "but this thing, even though it's agnostic about sexual promiscuity, pretty definitely reduces abortion rates!" will persuade them that it's really promoting harlotry and strumpetude. (Because "strumpet" is criminally under-utilized.)

    On my to-watch list: "Give Me Sex Jesus" reviewed: A documentary about Christian purity culture

    Chris Hallquist notes similarities between anti-gay and anti-sex-work arguments:
    Parallel problems exist in studies on sex work. As sociologist Ronald Weitzer explains, claims about all sex workers will be made based on studies of street-based sex workers, or sex workers in jail, or sex workers who reached out to service organizations for help.

    In both cases (anti-gay and anti-sex work), writers pushing an ideological agenda will ignore explicit warnings in the studies themselves.
    Which makes sense if both standpoints come from an ideology of sex-negativity. Now, I don't think one has some sort of moral obligation to be "sex positive" in the "celebrate literally every sexual choice anyone makes as long as it's consensual" sense, just that people probably shouldn't put sexual activity in the negative-infinity pit of their moral landscape. (This is in terms of, like, moral worldview; one can be 100% sex-averse personally, that's fine.)

    Book ends

    Amazon just built its own physical bookstore in the University District of Seattle. What a twist! So why do I think it's pointless for Amazon, and indeed any "corporate" bookseller, to operate a physical storefront? And why do I think that the only worthwhile brick-and-mortar bookstores are the "anything and everything" used bookstores? Simply put: serendipity versus suggestion. You may think that my ideas about the philosophy of bookstores are unorthodox, but if you agree, you might also like...

    Recently New York Times Magazine ran a story about the fascinating gutter-market for ultra-cheap used books sold on Amazon. You know, all those books that are priced at $0.01 (+$3.99 shipping)? Turns out they're often rescued from the landfill—sometimes literally. Sort of like donations to a food bank, libraries and thrift stores often get far more donations than they can properly deal with:
    [Thriftbooks], along with several other enormous used-book-selling operations that have popped up online in the past decade, is literally buying garbage. Thrift stores like Goodwill receive many more donations than they can physically accommodate. Employees rifle through donations, pick out the stuff that is most likely to sell and send the rest to a landfill. The same thing happens at public libraries; they can take only as many donations as their space and storage will allow, so eventually they have to dispose of books, too.
    While grabbing books out of dumpsters and selling the somewhat-worthwhile ones on Amazon may sound a bit odd, the owner of Thriftbooks makes a good point: “10 years ago, before companies like mine existed, those books were seen as having no value at all.” And that's fair; moreover, it's almost certainly the case that most printed books haven't survived transitions to new printings or digitization, as is true of motion pictures:
    But with each new iteration of the home viewing experience, the volume of available titles decreases. All of the movies available on celluloid never made it to VHS. All of the movies available on VHS never made it to DVD (40-45% never crossed over, according to estimates). And not all of the movies available on DVD are streaming — it’s not even close.
    So I think this is, overall, a net good for readers of physical books: thanks to these gutter-book operations, it's a lot easier to find randomly specific titles (in the NYTMag author's case, trashy mid-century crime thrillers) that you could never get at a Barnes & Noble or local bookstore.

    More recently, Amazon opened up an honest-to-Bezos, brick-and-mortar bookstore near the University of Washington. According to the Seattle Times article:
    Amazon is betting that the troves of data it generates from shopping patterns on its website will give it advantages in its retail location that other bookstores can’t match. It will use data to pick titles that will most appeal to Seattle shoppers.

    And that could also solve the business problem that has long plagued other bookstores: unsold books that gather dust on shelves and get sent back to publishers. More than most book retailers, Amazon has deep insight into customer buying habits and can stock its store with titles most likely to move.
    What are brick-and-mortar bookstores good for? Well, first of all, in my highly subjective opinion, we first have to distinguish between types of bookstores.

    Most obviously, there are corporate bookstores: Barnes & Noble, and now Amazon Bookstore I guess. These are stupid. Their strategy, as described in the Seattle Times article, is to guess at what you want to buy before you buy it, and therefore try to only stock those books with the greatest chance of being sold. Nothing wrong with that, except we already have a far better service for that, and it's called Amazon Dot Com. If I know what book I want, it's probably already in my Amazon wishlist. And with 2-day Prime shipping, well, I can be that patient at least.

    The absolute best physical bookstores, I think, are "true" used bookstores. You know, the kind that seem to have more books than shelf space, sometimes literally—for example, Eclipse Books in Fairhaven, WA has stacks of books on the floor, because its many shelves are already full.

    Why the scare quotes around "true"? Eh, well, here's the big subjectivity: I don't think all locally-owned bookstores, even ones that call themselves "new & used books," are "true" used bookstores. Often they're more like showcases for book lovers to feel good about themselves, and drink chai lattes together. You can tell when the employee-recommended books are far too obvious (even when I agree with the reviews!), and when the store has far too much open space. True used bookstores are barely curated. Corporate-wannabe bookstores are overly curated.

    I make this distinction because I think the appeal and the use of brick-and-mortar bookstores is the serendipity of finding random books. I wouldn't have picked up a book on the legal theory of censorship laws, or the existentialist critique of Freudian psychology, or a sociological analysis of role-playing gaming groups... except that I found them at used bookstores, or their temporary equivalent, library book sales.

    Public libraries are sort of a middle ground: the true use of a library is to help you find books to fit your (already known) interest, in a more semantic, nuanced way than just buying habits. "I'm looking for books about the economics of the world's space programs since 1970" is the sort of highly specific query that Amazon flubs, and for which you'd be hard pressed to stumble upon a result at a used bookstore. But giving those sorts of answers is just part of a librarian's job description.

    By comparison, even a bookstore like Powell's, spectacular as it is, is really more about the spectacle of books than about just dumping a pile of random books in a single space for customers to freely forage through.

    To summarize: Corporate bookstores try a sort of guided-allocation model of bookselling. They buy the books they think "you" (that is, the aggregate book-buying individual, the average customer) want to buy. And so people do: they buy the "Bestsellers," in a very chicken-and-egg phenomenon. But aside from getting the physical book right away, why go anywhere when you can just go online and have the book in two days?

    True used bookstores, by contrast, try a market-saturation model. They buy enough different kinds of books that any random customer has a pretty good chance of buying something. The assumption is not that everyone wants the same book (although of course popularity affects every bookseller's supply decisions), but that everyone will want some book. The whole ethos of serendipity is diametrically opposed to Amazon's ethos of cultivation and curation. Sure, their recommender engine might offer up what seem like serendipitous suggestions, but they're actually predictions based on your buying habits and the buying habits of like-minded individuals. It's just the corporate bookselling model in real time.
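    That "buying habits of like-minded individuals" bit is just collaborative filtering, and the basic logic fits in a few lines. Here's my own toy sketch of the general idea; the function, the sample data, and the overlap-count weighting are all invented for illustration and have nothing to do with Amazon's actual system:

    ```python
    # Toy collaborative filtering: recommend titles bought by customers
    # whose purchase histories overlap most with yours. (My illustration,
    # not Amazon's real recommender.)
    from collections import Counter

    def recommend(my_books, all_histories, k=2):
        """Score each unowned title by the overlap of its buyers with me."""
        scores = Counter()
        for history in all_histories:
            overlap = len(my_books & history)
            if overlap == 0:
                continue  # this customer shares nothing with me; skip
            for title in history - my_books:
                scores[title] += overlap  # weight by similarity to me
        return [title for title, _ in scores.most_common(k)]

    histories = [
        {"Dune", "Hyperion", "Foundation"},
        {"Dune", "Neuromancer"},
        {"Gone Girl", "The Girl on the Train"},
    ]
    print(recommend({"Dune", "Foundation"}, histories))
    # → ['Hyperion', 'Neuromancer']
    ```

    With the sample histories above, a customer who owns Dune and Foundation gets Hyperion first (two shared purchases with that buyer) ahead of Neuromancer (one shared purchase). That's exactly the point: the output feels like a serendipitous find, but it's a prediction extrapolated from people who already buy like you do.
    
    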

    So that's why I won't buy anything from the physical Amazon bookstore—if I ever go there.