Crypto collapse? Get in loser, we’re pivoting to AI – Attack of the 50 Foot Blockchain

By Amy Castor and David Gerard

“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

Half of crypto has been pivoting to AI. Crypto’s pretty quiet — so let’s give it a try ourselves!

Turns out it’s the same grift. And frequently the same grifters.

AI is the new NFT

“Artificial intelligence” has always been a science fiction dream. It’s the promise of your plastic pal who’s fun to be with — especially when he’s your unpaid employee. That’s the hype to lure in the money men, and that’s what we’re seeing play out now.

There is no such thing as “artificial intelligence.” Since the term was coined in the 1950s, it has never referred to any particular technology. We can talk about specific technologies, like General Problem Solver, perceptrons, ELIZA, Lisp machines, expert systems, Cyc, The Last One, Fifth Generation, Siri, Facebook M, Full Self-Driving, Google Translate, generative adversarial networks, transformers, or large language models — but these have nothing to do with each other except the marketing banner “AI.” A bit like “Web3.”

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

You’ll be delighted to hear that blockchain is out and AI is in.

It’s not clear if the VCs actually buy their own pitch for ChatGPT’s spicy autocomplete as the harbinger of the robot apocalypse. Though if you replaced VC Twitter with ChatGPT, you would see a significant increase in quality.

I want to believe

The tech itself is interesting and does things. ChatGPT or AI art generators wouldn’t be causing the problems they are if they didn’t generate plausible text and plausible images.

ChatGPT makes up text that statistically follows from the previous text, with memory over the conversation. The system has no idea of truth or falsity — it’s just making up something that’s structurally plausible.
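
For the technically curious, here is a deliberately crude sketch in C (our own toy illustration, nothing to do with OpenAI's actual code) of that basic move: a word-bigram sampler that continues a prompt with whatever word statistically tends to follow it. ChatGPT uses a transformer with billions of parameters rather than a lookup table, but it is still, at bottom, emitting statistically plausible continuations with no concept of truth anywhere in the machinery.

    /* toy_autocomplete.c -- a word-bigram sampler.
     * Illustrative only: real LLMs use transformer networks trained on
     * enormous corpora, but the core move is the same: emit a next token
     * that is statistically plausible given what came before. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define MAX_CANDIDATES 64

    /* A tiny "training corpus" of word tokens, terminated by NULL. */
    static const char *corpus[] = {
        "the", "model", "predicts", "the", "next", "word", "and", "the",
        "model", "has", "no", "idea", "of", "truth", "so", "the", "next",
        "word", "is", "just", "whatever", "usually", "follows", NULL
    };

    /* Return a random word that followed `prev` somewhere in the corpus. */
    static const char *next_word(const char *prev) {
        const char *candidates[MAX_CANDIDATES];
        int n = 0;
        for (int i = 0; corpus[i + 1] != NULL; i++)
            if (strcmp(corpus[i], prev) == 0 && n < MAX_CANDIDATES)
                candidates[n++] = corpus[i + 1];
        if (n == 0)
            return NULL;               /* dead end: continuation never seen */
        return candidates[rand() % n]; /* sampled in proportion to counts */
    }

    int main(void) {
        srand((unsigned)time(NULL));
        const char *word = "the";      /* the "prompt" */
        printf("%s", word);
        for (int i = 0; i < 12 && word != NULL; i++) {
            word = next_word(word);
            if (word != NULL)
                printf(" %s", word);
        }
        printf("\n");
        return 0;
    }

Run it a few times and you get fluent-looking, grammatically plausible nonsense, which is rather the point.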

Users speak of ChatGPT as “hallucinating” wrong answers — large language models make stuff up and present it as fact when they don’t know the answer. But any answers that happen to be correct were “hallucinated” in the same way.

If ChatGPT has plagiarized good sources, the constructed text may be factually accurate. But ChatGPT is absolutely not a search engine or a trustworthy summarization tool — despite the claims of its promoters.

ChatGPT certainly can’t replace human thinking. Yet people project sentient qualities onto ChatGPT and feel like they are conducting meaningful conversations with another person. When they realize that’s a foolish claim, they say they’re sure that’s definitely coming soon!

People’s susceptibility to anthropomorphizing an even slightly convincing computer program has been known since ELIZA, one of the first chatbots, in 1966. It’s called the ELIZA effect.

As Joseph Weizenbaum, ELIZA’s author, put it: “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
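
For flavor, here is a minimal ELIZA-style responder in C (our own sketch; the real 1966 program used a more elaborate script of decomposition and reassembly rules). The point is how little machinery it takes to trigger the effect: shallow keyword matching and canned replies, with no understanding anywhere.

    /* eliza_sketch.c -- a bare-bones ELIZA-style responder.
     * Keyword matching plus canned templates, nothing else. The original
     * 1966 ELIZA ("DOCTOR" script) was richer, but similarly superficial. */
    #include <stdio.h>
    #include <string.h>

    struct rule {
        const char *keyword;   /* substring to look for in the input */
        const char *response;  /* canned reply when the keyword appears */
    };

    static const struct rule rules[] = {
        { "mother",  "Tell me more about your family." },
        { "always",  "Can you think of a specific example?" },
        { "because", "Is that the real reason?" },
        { "sad",     "I am sorry to hear you are feeling sad." },
        { NULL,      "Please go on." }  /* fallback when nothing matches */
    };

    static const char *respond(const char *input) {
        for (int i = 0; rules[i].keyword != NULL; i++)
            if (strstr(input, rules[i].keyword) != NULL)
                return rules[i].response;
        /* the last entry is the catch-all fallback */
        return rules[sizeof(rules) / sizeof(rules[0]) - 1].response;
    }

    int main(void) {
        char line[256];
        printf("> ");
        while (fgets(line, sizeof line, stdin) != NULL)
            printf("%s\n> ", respond(line));
        return 0;
    }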

Better chatbots only amplify the ELIZA effect. When things do go wrong, the results can be disastrous:

  • A professor at Texas A&M worried that his students were using ChatGPT to write their essays. He asked ChatGPT if it had generated the essays! It said it might have. The professor gave the students a mark of zero. The students protested vociferously, producing evidence that they had written their essays themselves. One even asked ChatGPT about the professor’s Ph.D. thesis, and it said it might have written it. The university has reversed the grading. [Reddit; Rolling Stone]
  • Not one but two lawyers thought they could blindly trust ChatGPT to write their briefs. The program made up citations and precedents that didn’t exist. Judge Kevin Castel of the Southern District of New York — who those following crypto will know well for his impatience with nonsense — has required the lawyers to show cause not to be sanctioned into the sun. These were lawyers of several decades’ experience. [New York Times; order to show cause, PDF]
  • GitHub Copilot synthesizes computer program fragments with an OpenAI program similar to ChatGPT, based on the gigabytes of code stored in GitHub. The generated code frequently works! And it has serious copyright issues — Copilot can easily be induced to spit out straight-up copies of its source materials, and GitHub is currently being sued over this massive license violation. [Register; case docket]
  • Copilot is also a good way to write a pile of security holes. [arXiv, PDF, 2021; Invicti, 2022]
  • Text and image generators are increasingly used to make fake news. This doesn’t even have to be very good — just good enough. Deep fake hoaxes have been a perennial problem, most recently with a fake attack on the Pentagon, tweeted by an $8 blue check account pretending to be Bloomberg News. [Fortune]

This is the same risk in AI as the big risk in cryptocurrency: human gullibility in the face of lying grifters and their enablers in the press.

But you’re just ignoring how AI might end humanity!

The idea that AI will take over the world and turn us all into paperclips is not impossible!

It’s just that our technology is not within a million miles of that. Mashing the autocomplete button isn’t going to destroy humanity.

All of the AI doom scenarios are literally straight out of science fiction, usually from allegories of slave revolts that use the word “robot” instead. This subgenre goes back to Rossum’s Universal Robots (1920) and arguably back to Frankenstein (1818).

The warnings of AI doom originate with LessWrong’s Eliezer Yudkowsky, a man whose sole achievements in life are charity fundraising — getting Peter Thiel to fund his Machine Intelligence Research Institute (MIRI), a research institute that does almost no research — and finishing a popular Harry Potter fanfiction novel. Yudkowsky has literally no other qualifications or experience.

Yudkowsky believes there is no greater threat to humanity than a rogue AI taking over the world and treating humans as mere speedbumps. He believes this apocalypse is imminent. The only hope is to give MIRI all the money you have. This is also the most effective possible altruism.

Yudkowsky has also suggested, in an op-ed in Time, that we should conduct air strikes on data centers in foreign countries that run unregulated AI models. Not that he advocates violence, you understand. [Time; Twitter, archive]

During one recent “AI Safety” workshop, LessWrong AI doomers came up with ideas such as: “Strategy: start building bombs from your cabin in Montana and mail them to OpenAI and DeepMind lol.” In Minecraft, we presume. [Twitter]

We need to stress that Yudkowsky himself is not a charlatan — he is completely sincere. He means every word he says. This may be scarier.

Remember that cryptocurrency and AI doom are already close friends — Sam Bankman-Fried and Caroline Ellison of FTX/Alameda are true believers, as are Vitalik Buterin and many Ethereum people.

But what about the AI drone that killed its operator, huh?

Thursday’s big news story was from the Royal Aeronautical Society Future Combat Air & Space Capabilities Summit in late May about a talk from Colonel Tucker “Cinco” Hamilton, the US Air Force’s chief of AI test and operations: [RAeS]

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission — killing SAMs — and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Wow, this is pretty serious stuff! Except that it obviously doesn’t make any sense. Why would you program your AI that way in the first place?

The press was fully primed by Yudkowsky’s AI doom op-ed in Time in March. They went wild with the killer drone story because there’s nothing like a sci-fi doomsday tale. Vice even ran the headline “AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test.” [Vice, archive of 20:13 UTC June 1]

But it turns out that none of this ever happened. Vice added three corrections, the second noting that “the Air Force denied it conducted a simulation in which an AI drone killed its operators.” Vice has now updated the headline as well. [Vice, archive of 09:13 UTC June 3]

Yudkowsky went off about the scenario he had warned of suddenly playing out. Edouard Harris, another “AI safety” guy, clarified for Yudkowsky that this was just a hypothetical planning scenario and not an actual simulation: [Twitter, archive]

This particular example was a constructed scenario rather than a rules-based simulation … Source: know the team that supplied the scenario … Meaning an entire, prepared story as opposed to an actual simulation. No ML models were trained, etc.

The RAeS has also added a clarification to the original blog post: the colonel was describing a thought experiment as if the team had done the actual test.

The whole thing was just fiction. But it sure captured the imagination.

The lucrative business of making things worse

The real threat of AI is the bozos promoting AI doom who want to use it as an excuse to ignore real-world problems — like the risk of climate change to humanity — and to make money by destroying labor conditions and making products worse. This is because they’re running a grift.

Anil Dash observes (over on Bluesky, where we can’t link it yet) that venture capital’s playbook for AI is the same one it tried with crypto and Web3 and first used for Uber and Airbnb: break the laws as hard as possible, then build new laws around their exploitation.

The VCs’ actual use case for AI is treating workers badly.

The Writers Guild of America, a labor union representing writers for TV and film in the US, is on strike for better pay and conditions. One of the reasons is that studio executives are using the threat of AI against them. Writers think the plan is to get a chatbot to generate a low-quality script, which the writers are then paid less in worse conditions to fix. [Guardian]

Executives at the National Eating Disorders Association replaced hotline workers with a chatbot four days after the workers unionized. “This is about union busting, plain and simple,” said one helpline associate. The bot then gave wrong and damaging advice to users of the service: “Every single thing Tessa suggested were things that led to the development of my eating disorder.” The service has backtracked on using the chatbot. [Vice; Labor Notes; Vice; Daily Dot]

Digital blackface: instead of actually hiring black models, Levi’s thought it would be a great idea to take white models and alter the images to look like black people. Levi’s claimed it would increase diversity if they faked the diversity. One agency tried using AI to synthesize a suitably stereotypical “Black voice” instead of hiring an actual black voice actor. [Business Insider, archive]

Genius at work

Sam Altman: My potions are too powerful for you, Senator

Sam Altman, 38, is a venture capitalist and the CEO of OpenAI, the company behind ChatGPT. The media loves to tout Altman as a boy genius. He learned to code at age eight!

Altman’s blog post “Moore’s Law for Everything” elaborates on Yudkowsky’s ideas on runaway self-improving AI. The original Moore’s Law (1965) predicted that the number of transistors that engineers could fit into a chip would double every year. Altman’s theory is that if we just make the systems we have now bigger with more data, they’ll reach human-level AI, or artificial general intelligence (AGI). [blog post]

But that’s just ridiculous. Moore’s Law is slowing down badly, and there’s no actual reason to think that feeding your autocomplete more data will make it start thinking like a person. It might do better approximations of a sequence of words, but the current round of systems marketed as “AI” are still at the extremely unreliable chatbot level.

Altman is also a doomsday prepper. He has bragged about having “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to” in the event of super-contagious viruses, nuclear war, or AI “that attacks us.” [New Yorker, 2016]

Altman told the US Senate Judiciary Subcommittee that his autocomplete system with a gigantic dictionary was a risk to the continued existence of the human race! So they should regulate AI, but in such a way as to license large providers — such as OpenAI — before they could deploy this amazing technology. [Time; transcript]

Around the same time he was talking to the Senate, Altman was telling the EU that OpenAI would pull out of Europe if they regulated his company other than how he wanted. This is because the planned European regulations would address AI companies’ actual problematic behaviors, and not the made-up problems Altman wants them to think about. [Zeit Online, in German, paywalled; Fast Company]

The thing Sam’s working on is so cool and dank that it could destroy humanity! So you better give him a pile of money and a regulatory moat around his business. And not just take him at his word and shut down OpenAI immediately.

Occasionally Sam gives the game away that his doomerism is entirely vaporware: [Twitter; archive]

AI is how we describe software that we don’t quite know how to build yet, particularly software we are either very excited about or very nervous about

Altman has a long-running interest in weird and bad parasitical billionaire transhumanist ideas, including the “young blood” anti-aging scam that Peter Thiel famously fell for — billionaires as literal vampires — and a company that promises to preserve your brain in plastic when you die so your mind can be uploaded to a computer. [MIT Technology Review; MIT Technology Review]

Altman is also a crypto grifter, with his proof-of-eyeball cryptocurrency Worldcoin. This has already generated a black market in biometric data courtesy of aspiring holders. [Wired, 2021; Reuters; Gizmodo]

CAIS: Statement on AI Risk

Altman promoted the recent “Statement on AI Risk,” a widely publicized open letter signed by various past AI luminaries, venture capitalists, AI doom cranks, and a musician who met her billionaire boyfriend over Roko’s basilisk. Here is the complete text, all 22 words: [CAIS]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

A short statement like this on an allegedly serious matter will usually hide a mountain of hidden assumptions. In this case, you would need to know that the statement was promoted by the Center for AI Safety — a group of Yudkowsky’s AI doom acolytes. That’s the hidden baggage for this one.

CAIS is a nonprofit that gets about 90% of its funding from Open Philanthropy, which is part of the Effective Altruism subculture, which David has covered previously. Open Philanthropy’s main funders are Dustin Moskovitz and his wife Cari Tuna. Moskovitz made his money from co-founding Facebook and from his startup Asana, which was largely funded by Sam Altman.

That is: the open letter is the same small group of tech funders. They want to get you worrying about sci-fi scenarios and not about the socially damaging effects of their AI-based businesses.

Computer security guru Bruce Schneier signed the CAIS letter. He was called out on signing on with these guys’ weird nonsense, then he backtracked and said he supported an imaginary version of the letter that wasn’t stupid — and not the one he did in fact put his name to. [Schneier on Security]

And in conclusion

Crypto sucks, and it turns out AI sucks too. We promise we’ll go back to crypto next time.

“Don’t want to worry anyone, but I just asked ChatGPT to build me a better paperclip.” — Bethany Black

Correction: we originally wrote up the professor story as using Turnitin’s AI plagiarism tester. The original Reddit thread makes it clear what he did.

Become a Patron!

Your subscriptions keep this site going. Sign up today!


Batten Down Fix Later

Over on the socials, someone asked "Do you ever wish you had made yourself BDFL of the Rust project? Might there be less drama now if the project had been set up that way?"

This is a tricky question, not because of what it's superficially asking -- the answer there is both "no" and "no" -- but because of what I think it's indirectly asking. I think it's asking: is there some sort of original sin or fatal flaw in the Rust Project's governance -- perhaps its lack of a BDFL -- that's to blame for its frequent internal sociopolitical crises?

To that I can speak, hopefully briefly, or at least concretely. I think there are a few underlying causes, and one big pattern.


The Pattern

I haven't been involved in the project for a decade so I think anything I say needs to be taken with a big grain of salt, but I do keep up somewhat with the comings and goings, and it has not escaped my notice that a lot of people over the years have left the project, and a lot of the leaving has been on fairly sour terms. A lot of people feel regret and resentment about ever participating. This is sad.

I think the way this works is:

  1. There's an internal conflict in the project.

  2. The conflict is not well managed or resolved, at least not in any way that one might call professional or healthy.

  3. The conflict doesn't go away though, instead it's acted-out in ways we might call unprofessional or unhealthy, sometimes all at once, more often over a long period.

  4. Possibly after some period of stewing in phase 3, one of the parties to the conflict just leaves the project, and everyone tries to pretend the conflict didn't happen.

I've seen about 4 variants of the unhealthy acting-out phase, though there might be a few more:

  1. Use of informal power: pressure, social sway, cliques.

  2. Use of external formal power: going to someone's boss.

  3. Use of internal formal power: engaging the moderators.

  4. Wearing-down opposition through expenditure of time.

The departure phase is often a bit abrupt-seeming because of the opacity of the causes, and takes a few different forms too, often patterned after the acting-out:

  1. Quitting in disgust or even protest.

  2. Being managed-out or fired by one's boss.

  3. Being moderated-out or banned by the moderators.

  4. Disengaging due to exhaustion or burnout.


Understanding

All these phases have understandable dynamics. I'm not saying "good", but understandable. I understand how they happen, I may even have contributed to establishing the pattern. Here are some background facts that help understanding:


  1. Lots of people are just conflict-averse, from upbringing or personal trauma history or inherent nature or whatever. I'm conflict-averse myself! It's hard and tiring to engage with conflicts (especially for volunteers not being paid). It's often seemingly easier to avoid. Conflict avoidance is a sort of buy-comfort-now-pay-for-problems-later thing.

  2. Rust feels very high stakes. It feels incredibly improbable. It frequently attracts a Significant Amount Of Internet Attention. It feels universally adored and universally hated at once. It feels like it could fail at any moment, but also like it's always on the cusp of victory. It creates in many a sort of siege mentality, which I'm sure I helped incubate (I'm an anxious paranoid) and I'm sure this mentality is part of what drives so many to dedicate so much of themselves to it. It may be fun and gratifying, but for many it becomes bigger, they give their all to it, dedicate their careers or sense of purpose, identity. A lot of jobs and futures are often on the line! And many just legitimately burn out, working too much, too hard. And those who don't often retain an internal level of commitment that makes de-escalation hard, it makes compromise hard, it makes even publicly admitting problems hard.

  3. One of Rust's weirdest cultural norms here -- again I might have helped make it and if so I apologize -- is to behave like conflicts are all the sort of "false tradeoffs" that can be solved like the speed-vs-safety tradeoff Rust famously claims to solve without compromise. That everything can be win-win, and if you don't find such a solution you're not trying hard enough. I mean, it's cool when that can happen, but sometimes things are just in legitimate tradeoff or conflict! Some things are zero-sum.

  4. The project has, especially for its size, fairly minimal formal structure with which to manage or resolve conflicts. It was built out of volunteers on the internet, and incubated in an organization (Mozilla) that itself had weak formal structure. Go read The Tyranny Of Structurelessness. Informal structures or strictly last-ditch formal structures (moderators or external options) are often the only thing anyone can see to grab onto.

  5. One informal structure that governs a lot of Rust and a lot of open source in general (and is mentioned in The Tyranny Of Structurelessness) is that of time commitment. The passionate contributor often is willing and able to spend a lot of time on the project. And that time commitment is not always available to others. This is often explicitly stated as a positive virtue: "people who do the work get to decide". But those decisions often affect many other stakeholders, and "the people with the most time" might not represent those stakeholders well, or might lack skills or knowledge for the task at hand. And "putting in more time than others" is also a way -- a fairly unhealthy one -- of dealing with a conflict: rather than addressing it head-on, you just wear the other side out. This can even be employed within a formal structure, if there's wiggle room for "how much input you contribute per unit time". Some people will show up to every city council meeting to push the same agenda. It works, you get your way, and it's not great.


Fixing

I don't really know how to fix these problems. If I knew I would most certainly make suggestions. I've said above I feel a fair amount of responsibility for setting some patterns, but of course to some extent the past is past and the future is what matters for the project most.

I guess my main suggestion is a don't-listen-to-me suggestion: "hire and listen to professionals with training in the subject", where "the subject" covers everything "a bunch of compiler nerds" are typically bad at. Project management to political science to finance to communications to mediation to personnel. The Project is now a decently large (and very diffuse) organization, and humans have studied how to run those for a long time, have categories of professionals who are expert in each topic. Listen to them. Don't try to work each out from first principles, and don't pretend that because you're a bunch of compiler nerds on the internet you get to dodge all the mechanisms of a normal organization.

I don't know to what extent the new governance system bears enough of the fingerprints of such professionals, and I don't know if it will or won't do much to address The Pattern, but I might be surprised. I'm not skilled in these areas! Casually and ignorantly speaking: I like the parts that sound like formal delineation of powers, and term limits and role rotation to avoid burnout, and transparency of decisions. I like things that sound like stakeholder representation and time-investment limitation.

I don't know that I see enough acceptance of the reality of conflict, and the need to resolve it explicitly. I don't know if it does enough to imbue positions of power in the project -- including informal power -- with accountability for their actions, or to communicate that publicly to instil confidence. Mainly: I don't know if it will do enough to make life livable for people who don't want to dedicate themselves to the project body and soul.

I personally haven't anywhere near the bandwidth, it's all just too much. To those participating, you have my best wishes. Good luck.

Footnote: The Foundation

I want to be clear on one point: the site of origin for problems in Rust's governance is, as far as I can tell, not the Rust Foundation, and the chorus of people jumping on the Foundation every time there's Some Drama In The Community is usually misplaced.

Again speaking only from what I've been able to tell as an outsider observing and listening on back-channels, the Foundation usually appears to be the Adults In The Room, and when it does something that seems superficially weird it's usually because it's trying to impossibly square some circle handed to it by the Project.

I think the Foundation actually does more or less just want to support the Project, and the Project is consistently not being a very easy thing to support.

Footnote: Corporate sponsors

Moreover: while the project is partly volunteer-driven and partly corporate-sponsored (often but not solely via Foundation members), and at times I believe corporate sponsorship produces bad incentives in maintainers to not do quite enough simple maintenance, I don't think in Rust's case this has ever gone in the direction people worry the most about: companies "buying influence" in conflicts or unpopular decisions, or otherwise hijacking the language.

That's a possible problem, but also one the Foundation's structure was substantially designed to minimize the risk of, and so far I think we're not seeing it.

Footnote: The original ("BDFL") question

To give the "no" and "no" answers at the top of this post a little more flesh: I don't like attention or stress, I was operating near my limits while I was project tech lead back in 2009-2013, and part of my own departure had to do with hitting those limits and kinda falling apart (as well as the company not really responding to that fact well -- see also "everyone is human"). Everything I've described here in terms of people's human fallibility applies to me in spades!

Additionally, I've no reason to believe I would have set up strong or healthy formal mechanisms for decision making, conflict management or delegation and scaling. I have no training in any of these subjects and was totally winging it within my role at Mozilla. Mozilla itself seemed to have little skill at these subjects either. The one time I tried to do a "formally structured decision" on Rust's design, I tried to hold a ranked-choice vote on keywords, and it went terribly: everyone hated the results.

Footnote: Moderation

I don't know the entire saga of the moderators of the Rust project -- I really haven't been involved since like 2013 -- and so I don't wish to imply anything specific about their past behaviour (especially since today's mods aren't yesterday's mods anyway). IMO that's a subject for someone informed to debate elsewhere. I do want to say two things:

  1. I wrote the original CoC (it was shorter and simpler then) and I stand by the notion that having written community norms and a process of enforcing them is generally a thing internet communities need. I don't think "having mods" is bad, or mods exercising their powers (with care and oversight) is bad.

  2. I do want to acknowledge that mods are human and can both act-out in unhealthy ways themselves, or be engaged-with in bad faith to be instruments of someone else's unhealthy acting-out. I think this is usually rare, I don't think a lot of the people who complain about this usually have a leg to stand on, but it's possible and it's fair to demand a heightened level of scrutiny of moderator behaviour to avoid the possibility. IME most good mods welcome the opportunity to leave a paper trail for their decisions and explain themselves, subject to the caveat of "not wanting to make things worse and/or engage with internet grief mobs".


Footnote: Cliques and lack of transparency

Having friends you collaborate with is great! And doing stuff in private and not having to explain every little thing you think or say to randos on the internet is great! Neither of these things is a problem on their own; indeed these are often prerequisites for a lot of people feeling safe and comfortable and willing to participate at all. Being a Scrutinized Public Figure is often exhausting and can exclude people who don't have it in them, either by nature or circumstance. But a certain amount of transparency is a necessary part of making accountable decisions affecting other people -- part of the exercise of power.


SIMDe 0.7.6 Released


I’m pleased to announce the availability of the latest release of SIMD Everywhere (SIMDe), version 0.7.6, representing more than two years of work by over 30 developers since version 0.7.2. (I also released 0.7.4 two weeks ago, but it needed a few more fixes; thanks go to the early adopters who helped me out.)

SIMDe is a permissively-licensed (MIT) header-only library which provides fast, portable implementations of SIMD intrinsics for platforms which aren’t natively supported by the API in question.

For example, with SIMDe you can use SSE, SSE2, SSE3, SSE4.1 and 4.2, AVX, AVX2, and many AVX-512 intrinsics on ARM, POWER, WebAssembly, or almost any platform with a C compiler. That includes, of course, x86 CPUs which don’t support the ISA extension in question (e.g., calling AVX-512F functions on a CPU which doesn’t natively support them).

If the target natively supports the SIMD extension in question there is no performance penalty for using SIMDe. Otherwise, accelerated implementations, such as NEON on ARM, AltiVec on POWER, WASM SIMD on WebAssembly, etc., are used when available to provide good performance.
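
For anyone who hasn’t used SIMDe before, here is a rough sketch of the usage pattern (a minimal illustration rather than an excerpt from the documentation): include the header for the ISA extension you want and call the simde_-prefixed versions of the familiar intrinsics; the same source then builds on x86, ARM, WebAssembly, and so on. Defining SIMDE_ENABLE_NATIVE_ALIASES before the include additionally exposes the unprefixed names (e.g., _mm_add_epi32) so existing code can compile unchanged.

    /* simde_example.c -- add two vectors of four 32-bit integers using the
     * SSE2 intrinsic names, portably. On x86 this lowers to the native
     * instructions; elsewhere SIMDe substitutes an equivalent NEON,
     * AltiVec, WASM SIMD128, or scalar implementation. */
    #include <stdio.h>
    #include <stdint.h>
    #include <simde/x86/sse2.h>

    int main(void) {
        simde__m128i a = simde_mm_set_epi32(1, 2, 3, 4);
        simde__m128i b = simde_mm_set_epi32(10, 20, 30, 40);
        simde__m128i sum = simde_mm_add_epi32(a, b);   /* lane-wise add */

        int32_t out[4];
        simde_mm_storeu_si128((simde__m128i *)out, sum);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        /* prints "44 33 22 11": _mm_set_epi32 takes its arguments
         * highest lane first, so element 0 is 4 + 40. */
        return 0;
    }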

SIMDe has already been used to port several packages to additional architectures through either upstream support or distribution packages, particularly on Debian.

What’s new in 0.7.4 / 0.7.6

  • 40 new ARM NEON families implemented
  • Initial support for ARM SVE API implementation (14 families)
  • Complete support for x86 F16C API
  • Initial support for MIPS MSA API
  • Nearly complete support for WASM SIMD128 C/C++ API
  • Initial support for the E2K (Elbrus) architecture
  • Initial support for LoongArch LASX/LSX and optimized implementations of some SSE intrinsics
  • MSVC has many fixes, now compiled in CI using /ARCH:AVX, /ARCH:AVX2, and /ARCH:AVX512
  • Minimum meson version is now 0.54

As always, we have an extensive test suite to verify our implementations.

For a complete list of changes, check out the 0.7.4 and 0.7.6 release notes.

Below are some additional highlights:

X86

There are a total of 7470 SIMD functions on x86, 2971 (39.77%) of which have been implemented in SIMDe so far. Specifically for AVX-512, of the 5270 functions currently in AVX-512, SIMDe implements 1439 (27.31%).

Completely supported function families

Newly added function families

Additions to existing families

  • AVX512F: 579 additional, 856 total of 2660 (31.80%)
  • AVX512BW: 178 additional, 335 total of 828 (40.46%)
  • AVX512DQ: 77 additional, 111 total of 399 (27.82%)
  • AVX512_VBMI: 9 additional, 30 total of 30 💯%!
  • KNCNI: 113 additional, 114 total of 595 (19.16%)
  • VPCLMULQDQ: 1 additional, 2 total of 2 💯%!

Neon

SIMDe currently implements 56.46% of the ARM NEON functions (3766 out of 6670). If you don’t count 16-bit floats and poly types, it’s 75.95% (3766 / 4969).

Newly added families

  • addhn
  • bcax
  • cage
  • cmla
  • cmla_rot90
  • cmla_rot180
  • cmla_rot270
  • cvtn
  • fma
  • fma_lane
  • fma_n
  • ld2
  • ld4_lane
  • mla_lane
  • mlal_high_n
  • mlal_lane
  • mls_n
  • mlsl_high_n
  • mlsl_lane
  • mull_lane
  • qdmulh_lane
  • qdmulh_n
  • qrdmulh_lane
  • qrshrn_n
  • qrshrun_n
  • qshlu_n
  • qshrn_n
  • qshrun_n
  • recpe
  • recps
  • rshrn_n
  • rsqrte
  • rsqrts
  • shll_n
  • shrn_n
  • sqadd
  • sri_n
  • st2
  • st2_lane
  • st3_lane
  • st4_lane
  • subhn
  • subl_high
  • xar

MSA

Overall, SIMDe implements 40 of 533 (7.50%) functions from MSA.

What is coming next

Work on SIMDe is proceeding rapidly, but there are a lot of functions to implement… x86 alone has about 8,000 SIMD functions, and we’ve implemented about 3,000 of them. We will keep adding more functions and improving the implementations we already have.

If you’re interested in using SIMDe but need some specific functions to be implemented first, please file an issue and we may be able to prioritize those functions.

Getting Involved

If you’re interested in helping out please get in touch. We have a chat room on Matrix/Element which is fairly active if you have questions, or of course you can just dive right in on the issue tracker.


The .zip TLD sucks and it needs to be immediately revoked.


This shouldn't be allowed to happen. You might have been tricked into clicking this, assuming that the .zip in the URL was a filename. This is, of course, how it's been for decades. .zip isn't a valid part of a domain name! Except that Google has changed that.

On the design mistake of the .zip TLD

Throughout the 2010s, Google was easily one of the most insidiously corrupting forces on the internet, rivaled by none. Its takeover of the modern web through utter domination of the search engine market, chokehold over web standards, and near complete monopolization of web browsers has rendered much of the world beholden to it.

As of May 3rd, Google has also decided to add a whole new dimension to the layers of evil and/or incompetence. You can now purchase .zip and .mov domain names, like the one this page resides on! Isn't that just fun for the entire family? And by entire family, I mainly mean poor ol' grandma, because in what universe will people less versed in this news expect for a link ending in .mov to actually take them to a website? There is nearly no way for the average person to learn this, outside of finding out the hard way.

As it stands, this is certainly not one of the most egregious things Google has done as of yet, but it is telling of just how bad we as a society have allowed things to get. The people who care are asleep at the wheel at best, some aren't with us anymore, and ICANN has failed all of us by allowing this to happen. For decades engineers have been working hard to try and make the internet less susceptible to phishing attacks, look-alike domains, etc., and now money men have decided to unravel that work so somebody can purchase anyword.zip as a domain name.

There is only one correct solution to this. It's to completely remove any and all of these egregious filename extension TLDs with no questions asked, and punish the people who pushed for this. I'll only consider this problem solved whenever this domain becomes completely unreachable and non-usable.

Shame on @google, and if they had any trace remembrance of the idea of shame before profit, they would stop registering new .zip domains. I may sound like a ghoul but maybe you do not understand the fundamental undermining this seemingly simple incursion has on user expectation. - @SwiftOnSecurity

Will A.I. Become the New McKinsey? | The New Yorker


When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I’m not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I’m talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I’m not criticizing the idea of selling things; I’m criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I’m criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.

As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn’t really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role? Can A.I. do anything to assist workers instead of management?

Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one.

Many people think that A.I. will create more unemployment, and bring up universal basic income, or U.B.I., as a solution to that problem. In general, I like the idea of universal basic income; however, over time, I’ve become skeptical about the way that people who work in A.I. suggest U.B.I. as a response to A.I.-driven unemployment. It would be different if we already had universal basic income, but we don’t, so expressing support for it seems like a way for the people developing A.I. to pass the buck to the government. In effect, they are intensifying the problems that capitalism creates with the expectation that, when those problems become bad enough, the government will have no choice but to step in. As a strategy for making the world a better place, this seems dubious.

You may remember that, in the run-up to the 2016 election, the actress Susan Sarandon—who was a fervent supporter of Bernie Sanders—said that voting for Donald Trump would be better than voting for Hillary Clinton because it would bring about the revolution more quickly. I don’t know how deeply Sarandon had thought this through, but the Slovenian philosopher Slavoj Žižek said the same thing, and I’m pretty sure he had given a lot of thought to the matter. He argued that Trump’s election would be such a shock to the system that it would bring about change.

What Žižek advocated for is an example of an idea in political philosophy known as accelerationism. There are a lot of different versions of accelerationism, but the common thread uniting left-wing accelerationists is the notion that the only way to make things better is to make things worse. Accelerationism says that it’s futile to try to oppose or reform capitalism; instead, we have to exacerbate capitalism’s worst tendencies until the entire system breaks down. The only way to move beyond capitalism is to stomp on the gas pedal of neoliberalism until the engine explodes.

I suppose this is one way to bring about a better world, but, if it’s the approach that the A.I. industry is adopting, I want to make sure everyone is clear about what they’re working toward. By building A.I. to do jobs previously performed by people, A.I. researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in. Intentionally or not, this is very similar to voting for Trump with the goal of bringing about a better world. And the rise of Trump illustrates the risks of pursuing accelerationism as a strategy: things can get very bad, and stay very bad for a long time, before they get better. In fact, you have no idea of how long it will take for things to get better; all you can be sure of is that there will be significant pain and suffering in the short and medium term.

I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

People who criticize new technologies are sometimes called Luddites, but it’s helpful to clarify what the Luddites actually wanted. The main thing they were protesting was the fact that their wages were falling at the same time that factory owners’ profits were increasing, along with food prices. They were also protesting unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile industry. The Luddites did not indiscriminately destroy machines; if a machine’s owner paid his workers well, they left it alone. The Luddites were not anti-technology; what they wanted was economic justice. They destroyed machinery as a way to get factory owners’ attention. The fact that the word “Luddite” is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.

Whenever anyone accuses anyone else of being a Luddite, it’s worth asking, is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people’s lives? Or are they just trying to increase the private accumulation of capital?

Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn’t include better lives for people who work? What is the point of greater efficiency, if the money being saved isn’t going anywhere except into shareholders’ bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology—and those include uses that benefit shareholders over workers—without being described as opponents of technology.


Silicon Valley elites are afraid. History says they should be

The Main Quadrangle buildings at Stanford University, October 2, 2021. (David Madison / Getty Images)


It's become a common refrain among a certain set of Silicon Valley elite: They've been treated so unfairly. Case in point: Even after their bank of choice collapsed spectacularly, in no small part of their own doing, and the federal government moved with dispatch to guarantee all its deposits, tech execs and investors nonetheless spent the subsequent days loudly playing the victim.

The prominent venture capitalist David Sacks, who had lobbied particularly hard for government intervention, bemoaned a hateful media that will "make me be whatever they need me to be in order to keep their attack machine going." Michael Solana, a vice president at Peter Thiel's Founders Fund, wrote on his blog that tech is now "universally hated," warned of an incoming political war, and claimed a lot of people "genuinely seem to want a good old fashioned mass murder," presumably of tech execs.

It was a particularly galling display, a new high for a trend that's been on the rise for some time. Amid congressional hearings and dipping stock valuations, the tech elite have bemoaned the so-called techlash against their industry by those who worry it's grown too large and unaccountable. Waving away legitimate questions about the industry's labor inequities, climate impacts and civil rights abuses, they claim that the press is biased against them and that they're besieged on all sides by "woke" critics.

If only they realized just how good they have it, historically speaking.

It was mere decades ago, after all, that the Silicon Valley elite faced the active threat of actual, non-metaphorical violence. The most adamant critics of Big Tech of the 1970s didn't write strongly worded columns chastising them in newspapers or blast their politics on social media; they physically occupied their computer labs, destroyed their capital equipment, and even bombed their homes.

"Techlash is what Silicon Valley's ownership class calls it when people don't buy their stock," author Malcolm Harris tells me. "Today's tech billionaires are lucky people are making fun of them on the internet instead of firebombing their houses; that's what happened to Bill Hewlett back in the day."

A 1987 article in this newspaper makes his point. When William Hewlett retired from the company he founded, Hewlett-Packard, or HP, as it's known today, The Times dedicated a full paragraph to the various threats of violence that the billionaire faced in the 1970s:

In 1971, radical animosities directed at the upscale Palo Alto community and Stanford University campus brought terror into the Hewletts' lives: The modest Hewlett family home was fire-bombed. In 1976, son James, then 28, fought off would-be kidnapers. The same year, a radical group called the Red Guerrilla Family claimed responsibility when a bomb exploded in an HP building.

Harris is the author of "Palo Alto: A History of California, Capitalism, and the World," the book that is currently the talk of the town (it just hit the L.A. Times bestseller list), though not for the reasons that the valley's elites might prefer. It's a robust, sprawling history that's intensely critical of the Great Men of tech history, and even more so of the systems they served. It's been received enthusiastically, as an overdue corrective to the industry's potent penchant for self-mythology.


And some of the most potent mythologies, of course, rely on omission. Take, for instance, the popular narrative that whiz kids such as Bill Hewlett and Steve Jobs started the computer revolutions from their garages in Palo Alto, where their starkest opposition came in the form of square old corporations such as IBM and Xerox, and not actual, bomb-throwing revolutionaries.

Harris' work reminds us that this was far from the case. There was a movement far more organized, far more militant, and far more sharply opposed to the Big Tech companies of the day than anything we've seen in the last 10 years, and it's not even close.

When we think of the 1960s in California, we think of disparate, panoramic happenings in an explosive decade: the war in Vietnam, the rise of the computer, the student protest movement and so on. But Harris argues that the computer revolution didn't simply coexist with the war; it fueled it.

"These developments weren't just connected," Harris writes, "they were the same thing."

Intel and Hewlett-Packard revolutionized microchips, all right, but they sold them to the U.S. military, which used them to guide the weapons of war it was deploying in Southeast Asia. To the students, activists and organizers of the so-called New Left, Silicon Valley was hard-wiring the war effort. It was an instrument of oppression, and it had blood on its hands.

All this set the stage for a revolt against Silicon Valley's core operators. Palo Alto radicals "singled out Stanford's industrial community and its role in the Vietnam War specifically and capitalist imperialism generally," Harris writes. "And once they got their collective finger pointed in the right place, they attacked."

That's not a figure of speech, either. They really, quite physically, attacked the people and infrastructure of Silicon Valley that were connected to the war effort.

"The New Left tried to blow up more or less every computer they could get their hands on," Harris says. And since both were likely to be found on college campuses, they got their hands on a bunch of them. (At the time, remember, there was no PC; computers were still room-sized machines.)


The reasoning was simple: These computers were making the war possible, both by providing the physical hardware for missile targeting systems and such, and by processing data used to plan combat missions. The war caused untold suffering and death; dismantle the war machine, and you hamper the war effort. So that's exactly what Stanford's leftist organizers, affiliated with groups such as Students for a Democratic Society (SDS), tried to do.

First, they attempted peaceful tactics, such as a pressure campaign to halt the manufacture of napalm. It didn't work. So, taking their cues from the Black Panther Party, which was at the time perhaps the most powerful and influential radical left group in the nation, Stanford students and even faculty adopted direct and militant tactics. They published maps of the high-profile tech companies and research offices in Palo Alto that had won defense contracts or were otherwise involved in the war effort.

After the U.S. military bombed Cambodia, the student left escalated its tactics by targeting the very data processing infrastructure that was aiding the war effort.

They occupied the Applied Electronics Laboratory at Stanford itself. The AEL was an on-campus lab that was carrying out classified research for the Pentagon's war effort, and students moved to shut it down. The occupation ended with a major concession: that classified military research no longer would be conducted on campus, and that its resources would be used instead for community purposes.

The victory helped inspire copycat actions across the country, and even more militant ones. Students and activists bombed or destroyed with acid computer labs at Boston University, Loyola University, Fresno State, the University of Kansas and the University of Wisconsin, among others, causing millions of dollars in damage. The explosion at the University of Wisconsin-Madison killed Robert Fassnacht, a postdoctoral researcher who, unbeknownst to the saboteurs, had been working late at night. IBM offices in San Jose and New York were bombed, too.

With momentum at their backs, Stanford radicals decided to up the stakes and occupy an even larger target: the Stanford Research Institute, or SRI, an off-campus research center that was overseen by the university's board of trustees and that had won enormous military contracts.

"Stanford is the nerve center of this complex, which now does over 10% of the Pentagon's research and development," activists wrote in a flier promoting the action. The flier lambasted the "socialized profits for the rich" generated by the SRI and the way it was used "to produce weapons to put down insurgents at home and in the Third World," and it called out HP founder David Packard by name. It boasted that, as a result of their previous actions, a number of SRI researchers had quit and the institute had been passed over for Department of Defense funding.

This flier had a map, too, with the pertinent Big Tech buildings circled: Hewlett-Packard, Varian, SRI. It was labeled "How to Destroy an Empire."

On May 19, 1969, they moved to shut down the SRI.

It was a militant movement, and it was effective. It deterred investment in the war effort, made universities rethink their involvement with the Department of Defense, and contributed to the eventual withdrawal and policy reforms won by the broader antiwar movement.

So why don't we remember it much? Why do we remember the Summer of Love and communitarian counterculture and the Whole Earth Catalog but not a violent struggle over the deployment of technology and those who profited from it?

Or as Harris puts it: Why are we more likely to hear about the Yippies trying to levitate the Pentagon than SDS successfully bombing the Pentagon?

One reason is pretty simple: It's a feel-bad story that complicates the narrative that has grown increasingly central to how we understand the history of how our technology was invented and produced.

"In Silicon Valley in particular, the clear anti-tech strategy of the anti-war movement is inconvenient for the predominant 'hippies invented the Internet' narrative," Harris says, "so many of the region's historians have shunted that part aside."

But the fear remains. Even if there's been nothing resembling organized threats on their well-being (guillotine memes on Twitter don't count), today's tech elites can certainly feel the resentment brewing.

Maybe that's why they're so sensitive to the suggestion that the government rescue of SVB was a venture capitalist bailout; that it was more special treatment for a constituency that drives Model Xs to their Tahoe ski chalets, that wants to reap the rewards of investing in world-changing technologies while bearing so little of the actual risk. Much of today's most visible tech set knows that lots of people don't like the inequality they represent, the preferential treatment they seem to enjoy, and the forces their companies and investments have set in motion.

They surely see Amazon workers and Uber drivers becoming increasingly agitated and organized, and openly pushing for change against gross inequalities. They see movements for gender equality and climate justice at Google and Microsoft.

They see the outrage over the fact that, like its forebears in Hewlett-Packard and earlier Silicon Valley companies, the newest iteration of Big Tech has become a major defense contractor too (Google, Amazon and Microsoft have vied to provide cloud, artificial intelligence and robotics to the military), and they see movements opposing it, as in the #TechWontBuildIt effort, where tech workers campaigned to reject such projects. (And hey, HP is still a defense contractor.) They see backlash against social media companies giving authoritarian regimes the tools to commit atrocities. If they knew to look, today's tech elites might see a lot of the same kindling that was laid on the ground in the combustible '60s.

"They think about this stuff constantly, but it's in the build-a-killer-robot-army way, not the Patagonia way," Harris says, referring to the former Patagonia billionaire Yvon Chouinard, who gave away his entire company as a means of combating the ills of extreme wealth.

In other words, they'd rather keep up the flame wars on social media and build survival bunkers in Montana than address the social ills their critics charge them with exacerbating.

"I think they are very, very worried," Harris says. If history is any precedent, perhaps they should be.
