Jon Stewart on Trevor Noah’s What Now Podcast

No one has discernment for what they aren’t. […] You can’t. It’s the hardest thing in the world. It’s hard enough to have empathy to what they aren’t let alone discernment. […]
Jon Stewart at 50:30

If we were more understanding of prejudice and stereotype and less tolerant of racism we’d understand that prejudice and stereotype are functions mostly of ignorance and of experience. Racism is malevolent, right? But the other is way more natural, but we react as though it would metastasize immediately. And so I think we throw out barriers to each other […] before we have to.
Jon Stewart at 56:00

Century-Scale Storage

What would you use to keep (digital) data safe for at least a hundred years? Maxwell Neely-Cohen looks at all the factors, possible technologies, and social and economic challenges you have to contend with if you intentionally want to store data for a century. He explicitly chose that time scale because it is at the edge of what a human can experience, yet it lies outside a single human’s working life and beyond the lifetime of most companies or institutions. So the premise sets you up for a host of problems to be solved. He also analyses past and present strategies for recording and keeping data and evaluates their potential for keeping data safe at century scale.
It’s long, but worth it.

We’ll Ask The AI How to Make Money

We have no current plans to make revenue.

We have no idea how we may one day generate revenue.

We have made a soft promise to investors that once we’ve built a general intelligence system, basically we will ask it to figure out a way to generate an investment return for you.

Sam Altman to VCs in 2024

A video of this memorable moment … you can’t make this up.

We Don’t Want “Privacy”-“Enhancing” Technologies in Our Browsers

The current trend for privacy-enhancing technologies for surveillance in web browsers are going to be remembered as a technical dead end, an artifact of an unsustainable advertising oligopoly.

Don Marti has 10 succinct points on why users (aka we) don’t actually want so-called Privacy-Enhancing Technologies (PETs) … some technical, some social, some economic.

Best “AI”-Rant

Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course fucking catastrophe.

How about you remain competitive by fixing your shit? I’ve met a lead data scientist with access to hundreds of thousands of sensitive customer records who is allowed to keep their password in a text file on their desktop, and you’re worried that customers are best served by using AI to improve security through some mechanism that you haven’t even come up with yet? You sound like an asshole and I’m going to kick you in the jaw until, to the relief of everyone, a doctor will have to wire it shut, giving us ten seconds of blessed silence where we can solve actual problems.

After some general ranting, the author addresses several common “reasons” why a company might want to use LLM/AI tools.