
The Obvious Warning Sign


Over the last year or two, I’ve started to get back into gaming after a long break—and a lot of the games I’ve been appreciating have been indies. The starting point for me was a Stadia subscription, through which I really got into Celeste. (I died a lot.) I eventually switched to the better-in-every-way Xbox Game Pass—which introduced me to amazing modern titles like Chained Echoes and Hollow Knight—before finally landing in the Steam ecosystem, where I spent the summer engrossed in Dave The Diver. (In case you’re curious, I’m just starting to embrace the insanity of Pizza Tower.)

Gaming has become a mental chill zone for me, a way to hit the pause button on a busy mind.

I’m not a console gamer at this point—and I admit my taste leans a little more in the retro direction, largely because of what I grew up with. But if you were to ask me who created these games, I would most assuredly say their developers. I understand that, at a higher level, developers use tools that come with their own costs—they have to pay for the laptops, the development environments, and the tooling. But when it comes down to it, the spark of creation is all theirs.

Which is why the Unity situation is so infuriating. It’s not that we don’t understand what Unity built—a gaming-oriented development environment, with real infrastructure and a real ecosystem behind it. Of course they created something important. But at some point, their role in the creation should end. They should remain the helping hand behind the scenes—not the constant reminder in front of them.

But with some policy sleight-of-hand and a sudden rate change, that is very much not the case. They want to charge developers per install for the right to use their runtime. It’s a classic case of the studio upstaging the director and the cast, which … is kind of a trend right now.




I’m not revealing any new information by stating that this went over really poorly and ended up biting Unity pretty hard. They are going to course-correct, but my guess is that the stain it left behind will be permanent.

I think the Unity news points to the same disconnect a lot of people feel when we hear that Apple takes a 30 percent cut of your App Store purchase. Apple made the phone and the ecosystem—but at what point do they stop getting a cut? It’s not like Apple can’t live with $70 billion in quarterly revenue rather than $80 billion.

From a preservation perspective, this news introduces all sorts of new, messy problems that I anticipate will only get worse over time. We are going to see old games disappear from the internet, games that should simply be able to stay alive for people who want to enjoy them. But instead, we’re stuck dealing with corporate greed from a company that sold itself on the promise of supporting developer independence.

It’s a reminder, one we have so desperately needed someone to make: All corporations, even the ones with good intentions, are at risk of breaking the unspoken contract of fairness with their users. When the technology is mature and the motivation to innovate with the tools starts to fade, the force of commercial interests will eventually bend that contract until it breaks … causing untold damage. This is especially true if stockholders are the ones calling the shots at a higher level.

(Unity, of course, does not have its market to itself, but if you’ve been developing your game in Unity for four years, you’re stuck using them.)

When you could simply buy a box in a store, there were limits on the ways companies could damage your bottom line over time. Now, we live in a world where everything is a potential ongoing bill. That changes ownership, it changes copyright, and it changes our relationship with stuff. End users, suddenly stuck with a dozen ongoing bills for random things they somehow can’t get rid of, are on a treadmill that threatens to knock them over—and some of those bills may still be coming due 50 years after you’re dead in the ground.

Unity adding this additional charge onto indie developers, even if they cancel it, shows how these pressures will only get worse as market power consolidates. Every level of society is going to find itself buried under ongoing charges from an as-a-service model that powers an economic machine that too often takes more than it gives.

It’s brutal. We need to get out of this trap.

Unity is the warning sign. It is the alarm telling us that we need to build more of our technology infrastructure on fundamental, renewable models. In the case of Unity, that means going open source. In the case of end users, we need to embrace technologies that do one of three things:

  • First, sell us products with a designated end point, when possible, and no need for an ongoing relationship if we so choose.
  • Second, give us options to extend the things we bought outside of their respective ecosystems.
  • Finally, allow for the use of technology in a free-as-in-speech open source format. If companies want to charge on an ongoing basis, there should be ongoing value in exchange, not rent-seeking. If you’re getting free beer, you’re getting pitched something a mile down the road.

Let’s look at the Unity situation as what it is: A company using the strength of its tree trunk to shake a few more pears out of the tree. It knows it will not fall over, no matter how hard someone shakes.

But the people stuck up in the branches might. Perhaps we should stop relying on trees to secure our fortunes.

 
 
Awakening Links

I must say, it’s nice to see Rolling Stone so unwilling to go to bat for its cofounder and longtime leader over some pretty awful comments [NYTimes link]. Justice for Joni and Stevie!

Did you know you could buy silence on jukeboxes back in the 1940s and 1950s? VWestlife has the scoop on something I kinda wish I could get at a Starbucks sometime.

If you want a stronger understanding of the strike, I recommend this Hollywood Reporter breakdown of the AMPTP, the key negotiating arm of the studios.

 
 





Pluralistic: "Open" "AI" isn't (18 August 2023)






[Image: Tux the Penguin, posed on a Matrix credit-sequence 'code waterfall,' his eyes replaced with the menacing red eyes of HAL 9000 from Kubrick's '2001: A Space Odyssey.']

"Open" "AI" isn't (permalink)

The crybabies who freak out about The Communist Manifesto appearing on university curricula clearly never read it – chapter one is basically a long hymn to capitalism's flexibility and inventiveness, its ability to change form and adapt itself to everything the world throws at it and come out on top:

https://www.marxists.org/archive/marx/works/1848/communist-manifesto/ch01.htm#007

Today, leftists signal this protean capacity of capital with the -washing suffix: greenwashing, genderwashing, queerwashing, wokewashing – all the ways capital cloaks itself in liberatory, progressive values, while still serving as a force for extraction, exploitation, and political corruption.

A smart capitalist is someone who, sensing the outrage at a world run by 150 old white guys in boardrooms, proposes replacing half of them with women, queers, and people of color. This is a superficial maneuver, sure, but it's an incredibly effective one.

In "Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI," a new working paper, Meredith Whittaker, David Gray Widder and Sarah B Myers document a new kind of -washing: openwashing:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807

Openwashing is the trick that large "AI" companies use to evade regulation and neutralize critics, by casting themselves as forces of ethical capitalism, committed to the virtue of openness. No one should be surprised to learn that the products of the "open" wing of an industry whose products are neither "artificial" nor "intelligent" are also not "open." Every word AI hucksters say is a lie, including "and" and "the."

So what work does the "open" in "open AI" do? "Open" here is supposed to invoke the "open" in "open source," a movement organized around a software development methodology that promotes code transparency, reusability and extensibility – three important virtues.

But "open source" itself is an offshoot of a more foundational movement, the Free Software movement, whose goal is to promote freedom, and whose method is openness. The point of software freedom was technological self-determination, the right of technology users to decide not just what their technology does, but who it does it to and who it does it for:

https://locusmag.com/2022/01/cory-doctorow-science-fiction-is-a-luddite-literature/

The open source split from free software was ostensibly driven by the need to reassure investors and businesspeople so they would join the movement. The "free" in free software is (deliberately) ambiguous, a bit of wordplay that sometimes misleads people into thinking it means "Free as in Beer" when really it means "Free as in Speech" (in Romance languages, these distinctions are captured by translating "free" as "libre" rather than "gratis").

The idea behind open source was to rebrand free software in a less ambiguous – and more instrumental – package that stressed cost-savings and software quality, as well as "ecosystem benefits" from a co-operative form of development that recruited tinkerers, independents, and rivals to contribute to a robust infrastructural commons.

But "open" doesn't merely resolve the linguistic ambiguity of libre vs gratis – it does so by removing the "liberty" from "libre," the "freedom" from "free." "Open" changes the pole-star that movement participants follow as they set their course. Rather than asking "Which course of action makes us more free?" they ask, "Which course of action makes our software better?"

Thus, by dribs and drabs, the freedom leeches out of openness. Today's tech giants have mobilized "open" to create a two-tier system: the largest tech firms enjoy broad freedom themselves – they alone get to decide how their software stack is configured. But for all of us who rely on that (increasingly unavoidable) software stack, all we have is "open": the ability to peer inside that software and see how it works, and perhaps suggest improvements to it:

https://www.youtube.com/watch?v=vBknF2yUZZ8

In the Big Tech internet, it's freedom for them, openness for us. "Openness" – transparency, reusability and extensibility – is valuable, but it shouldn't be mistaken for technological self-determination. As the tech sector becomes ever-more concentrated, the limits of openness become more apparent.

But even by those standards, the openness of "open AI" is thin gruel indeed (that goes triple for the company that calls itself "OpenAI," which is a particularly egregious openwasher).

The paper's authors start by suggesting that the "open" in "open AI" is meant to imply that an "open AI" can be scratch-built by competitors (or even hobbyists), but that this isn't true. Not only is the material that "open AI" companies publish insufficient for reproducing their products; even if those gaps were plugged, the resource burden required to do so is so intense that only the largest companies could shoulder it.

Beyond this, the "open" parts of "open AI" are insufficient for achieving the other claimed benefits of "open AI": they don't promote auditing, or safety, or competition. Indeed, they often cut against these goals.

"Open AI" is a wordgame that exploits the malleability of "open," but also the ambiguity of the term "AI": "a grab bag of approaches, not… a technical term of art, but more … marketing and a signifier of aspirations." Hitching this vague term to "open" creates all kinds of bait-and-switch opportunities.

That's how you get Meta claiming that LLaMa-2 is "open source," despite being licensed in a way that is absolutely incompatible with any widely accepted definition of the term:

https://blog.opensource.org/metas-llama-2-license-is-not-open-source/

LLaMa-2 is a particularly egregious openwashing example, but there are plenty of other ways that "open" is misleadingly applied to AI: sometimes it means you can see the source code, sometimes that you can see the training data, and sometimes that you can tune a model, all to different degrees, alone and in combination.

But even the most "open" systems can't be independently replicated, due to raw computing requirements. This isn't the fault of the AI industry – the computational intensity is a fact, not a choice – but when the AI industry claims that "open" will "democratize" AI, they are hiding the ball. People who hear these "democratization" claims (especially policymakers) are thinking about entrepreneurial kids in garages, but unless these kids have access to multi-billion-dollar data centers, they can't be "disruptors" who topple tech giants with cool new ideas. At best, they can hope to pay rent to those giants for access to their compute grids, in order to create products and services at the margin that rely on existing products, rather than displacing them.
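(A hedged back-of-envelope, not from the paper, makes the scale concrete. Using public estimates – roughly 3×10²³ FLOPs to train a GPT-3-class model, and about 3×10¹⁴ FLOP/s of dense BF16 throughput from an A100-class GPU – with an assumed 40% utilization:)

    # Back-of-envelope only: all three numbers are public estimates or
    # assumptions of mine, not figures from the working paper.
    train_flops = 3e23   # ~GPT-3-class training compute (public estimate)
    gpu_flops = 3e14     # ~A100 dense BF16 peak, in FLOP/s (public spec)
    utilization = 0.4    # assumed real-world hardware efficiency

    seconds = train_flops / (gpu_flops * utilization)
    gpu_years = seconds / (3600 * 24 * 365)
    print(f"~{gpu_years:,.0f} GPU-years on a single card")
    # ~79 GPU-years on one card: matching the giants means thousands of
    # GPUs, plus the interconnect, storage, and staff to run them.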

The "open" story, with its claims of democratization, is an especially important one in the context of regulation. In Europe, where a variety of AI regulations have been proposed, the AI industry has co-opted the open source movement's hard-won narrative battles about the harms of ill-considered regulation.

For open source (and free software) advocates, many tech regulations aimed at taming large, abusive companies – such as requirements to surveil and control users to extinguish toxic behavior – wreak collateral damage on the free, open, user-centric systems that we see as superior alternatives to Big Tech. This leads to the paradoxical effect of regulation passed to "punish" Big Tech that ends up simply shaving an infinitesimal percentage off the giants' profits, while destroying the small co-ops, nonprofits and startups before they can grow into viable alternatives.

The years-long fight to get regulators to understand this risk has been waged by principled actors working for subsistence nonprofit wages or for free, and now the AI industry is capitalizing on lawmakers' hard-won consideration for collateral damage by claiming to be "open AI" and thus vulnerable to overbroad regulation.

But the "open" projects that lawmakers have been coached to value are precious because they deliver a level playing field, competition, innovation and democratization – all things that "open AI" fails to deliver. The regulations the AI industry is fighting also don't necessarily implicate the speech implications that are core to protecting free software:

https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech

Just think about LLaMa-2. You can download it for free, along with the model weights it relies on – but not detailed specs for the data that was used in its training. And the source code is licensed under a homebrewed license cooked up by Meta's lawyers, one that only glancingly resembles anything from the Open Source Definition:

https://opensource.org/osd/

Core to Big Tech companies' "open AI" offerings are tools like Meta's PyTorch and Google's TensorFlow. These tools are indeed "open source," licensed under real OSS terms. But they are designed and maintained by the companies that sponsor them, and optimized for the proprietary back-ends each company offers in its own cloud. When programmers train themselves to develop in these environments, they are gaining expertise in adding value to a monopolist's ecosystem, locking themselves in with their own expertise. This is a classic example of software freedom for tech giants and open source for the rest of us.

One way to understand how "open" can produce a lock-in that "free" might prevent is to think of Android: Android is an open platform in the sense that its source code is freely licensed, but the existence of Android doesn't make it any easier to challenge the mobile OS duopoly with a new mobile OS; nor does it make it easier to switch from Android to iOS and vice versa.

Another example: MongoDB, a free/open database tool that was adopted by Amazon, which subsequently forked the codebase and tuned it to work on its proprietary cloud infrastructure.

The value of open tooling as a stickytrap – creating a pool of developers who end up as sharecroppers glued to a specific company's closed infrastructure – is well understood and openly acknowledged by "open AI" companies. Zuckerberg boasts about how PyTorch ropes developers into Meta's stack: "when there are opportunities to make integrations with products, [so] it’s much easier to make sure that developers and other folks are compatible with the things that we need in the way that our systems work."

Tooling is a relatively obscure issue, primarily debated by developers. A much broader debate has raged over training data – how it is acquired, labeled, sorted and used. Many of the biggest "open AI" companies are totally opaque when it comes to training data. Google and OpenAI won't even say how many pieces of data went into their models' training – let alone which data they used.

Other "open AI" companies use publicly available datasets like the Pile and CommonCrawl. But you can't replicate their models by shoveling these datasets into an algorithm. Each one has to be groomed – labeled, sorted, de-duplicated, and otherwise filtered. Many "open" models merge these datasets with other, proprietary sets, in varying (and secret) proportions.

Quality filtering and labeling for training data is incredibly expensive and labor-intensive, and involves some of the most exploitative and traumatizing clickwork in the world, as poorly paid workers in the Global South make pennies for reviewing data that includes graphic violence, rape, and gore.

Not only is the product of this "data pipeline" kept a secret by "open" companies, the very nature of the pipeline is likewise cloaked in mystery, in order to obscure the exploitative labor relations it embodies (the joke that "AI" stands for "absent Indians" comes out of the South Asian clickwork industry).

The most common "open" in "open AI" is a model that arrives built and trained, which is "open" in the sense that end-users can "fine-tune" it – usually while running it on the manufacturer's own proprietary cloud hardware, under that company's supervision and surveillance. These tunable models are undocumented blobs, not the rigorously peer-reviewed transparent tools celebrated by the open source movement.

If "open" was a way to transform "free software" from an ethical proposition to an efficient methodology for developing high-quality software; then "open AI" is a way to transform "open source" into a rent-extracting black box.

Some "open AI" has slipped out of the corporate silo. Meta's LLaMa was leaked by early testers, republished on 4chan, and is now in the wild. Some exciting stuff has emerged from this, but despite this work happening outside of Meta's control, it is not without benefits to Meta. As an infamous leaked Google memo explains:

Paradoxically, the one clear winner in all of this is Meta. Because the leaked model was theirs, they have effectively garnered an entire planet's worth of free labor. Since most open source innovation is happening on top of their architecture, there is nothing stopping them from directly incorporating it into their products.

https://www.searchenginejournal.com/leaked-google-memo-admits-defeat-by-open-source-ai/486290/

Thus, "open AI" is best understood as "as free product development" for large, well-capitalized AI companies, conducted by tinkerers who will not be able to escape these giants' proprietary compute silos and opaque training corpuses, and whose work product is guaranteed to be compatible with the giants' own systems.

The instrumental story about the virtues of "open" often invokes auditability: the fact that anyone can look at the source code makes it easier for bugs to be identified. But as open source projects have learned the hard way, the fact that anyone can audit your widely used, high-stakes code doesn't mean that anyone will.

The Heartbleed vulnerability in OpenSSL was a wake-up call for the open source movement – a bug that endangered every secure webserver connection in the world, which had hidden in plain sight for years. The result was an admirable and successful effort to build institutions whose job it is to actually make use of open source transparency to conduct regular, deep, systemic audits.

In other words, "open" is a necessary, but insufficient, precondition for auditing. But when the "open AI" movement touts its "safety" thanks to its "auditability," it fails to describe any steps it is taking to replicate these auditing institutions – how they'll be constituted, funded and directed. The story starts and ends with "transparency" and then makes the unjustifiable leap to "safety," without any intermediate steps about how the one will turn into the other.

It's a Magic Underpants Gnome story, in other words:

Step One: Transparency

Step Two: ??

Step Three: Safety

https://www.youtube.com/watch?v=a5ih_TQWqCA

Meanwhile, OpenAI itself has gone on record as objecting to "burdensome mechanisms like licenses or audits" as an impediment to "innovation" – all the while arguing that these "burdensome mechanisms" should be mandatory for rival offerings that are more advanced than its own. To call this a "transparent ruse" is to do violence to good, hardworking transparent ruses all the world over:

https://openai.com/blog/governance-of-superintelligence

Some "open AI" is much more open than the industry dominating offerings. There's EleutherAI, a donor-supported nonprofit whose model comes with documentation and code, licensed Apache 2.0. There are also some smaller academic offerings: Vicuna (UCSD/CMU/Berkeley); Koala (Berkeley) and Alpaca (Stanford).

These are indeed more open (though Alpaca – which ran on a laptop – had to be withdrawn because it "hallucinated" so profusely). But to the extent that the "open AI" movement invokes (or cares about) these projects, it is in order to brandish them before hostile policymakers and say, "Won't someone please think of the academics?" These are the poster children for proposals like exempting AI from antitrust enforcement, but they're not significant players in the "open AI" industry, nor are they likely to be for so long as the largest companies are running the show:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4493900

(Image: Cryteria, CC BY 3.0, modified)



This day in history (permalink)

#15yrsago Olympic logo cops enforce stupid rules with masking tape https://www.wsj.com/articles/SB121885240984946511

#15yrsago Howard Zinn’s “A People’s History of American Empire” graphic novel https://memex.craphound.com/2008/08/18/howard-zinns-a-peoples-history-of-american-empire-graphic-novel/

#5yrsago Spider Robinson’s joke https://memex.craphound.com/2018/08/18/spider-robinsons-joke/

#5yrsago Antivirus maker Sentinelone uses copyright claims to censor video of security research that revealed defects in its products https://www.theregister.com/2018/08/18/sentinelone_bsides_copyright_takedown/

#5yrsago Catholic League insists that it’s only rape when priests “penetrate” children https://web.archive.org/web/20180819033638/http://www.patheos.com/blogs/progressivesecularhumanist/2018/08/catholic-league-on-predatory-priests-its-not-rape-if-the-child-isnt-penetrated/

#5yrsago Criminals have perfected the art of taking over dead peoples’ online accounts https://www.malwarebytes.com/blog/news/2018/03/the-digital-entropy-of-death-what-happens-to-your-online-accounts-when-you-die



Colophon (permalink)


Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. FORTHCOMING TOR BOOKS FEB 2024

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: The Internet Con: How to Seize the Means of Computation (audiobook outtake) https://craphound.com/news/2023/08/01/the-internet-con-how-to-seize-the-means-of-computation-audiobook-outtake/


Upcoming books:

  • The Internet Con: A nonfiction book about interoperability and Big Tech, Verso, September 2023

  • The Lost Cause: a post-Green New Deal eco-topian novel about truth and reconciliation with white nationalist militias, Tor Books, November 2023


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

(Latest Medium column: "Enshitternet: The old, good internet deserves a new, good internet" https://doctorow.medium.com/enshitternet-c1d4252e5c6b)

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla


At a Societal Level “Can’t Afford” Is Bullshit


John Maynard Keynes – “Anything we can do, we can afford.”

Money tells you how much of society’s resources you can command. Can you use that building, that land or that person?

It’s more than that, of course, but this is its most fundamental use.

But we are all familiar with the fact that there are often empty buildings, unemployed people, and land which is not being used productively. We also know that often what people, land and buildings are being used for is bad: a net negative.

99% of Wall Street, for just one example. 85% of the US military, for a second.

We use money so much that we forget it’s only a proxy. What matters is the actual resources: do we have enough people, land, buildings, oil, steel, and other resources to do something? Can we get the resources we need, either by moving people and other resources away from bad uses to good ones, or by creating more?

If we have enough or can create enough or can redistribute enough resources, we can do that thing, whatever it is. The limit isn’t money, the limit is actual, real resources.

Estimates put bullshit jobs at about 40% or so. I’d personally put it higher: jobs that are either pointless or actively harmful are the majority of what we do.

We can do plenty, any time we, as societies, really want to.

There will come a time, and that time is not so distant, when we can’t—when real and painful decisions will have to be made. But we are still, in most of the developed world, in a surplus situation, with a lot of resources misallocated. Reallocate them and we can fix many of our problems and mitigate almost all the rest, while massively improving human and animal welfare.

“Afford” is a word for people, not for governments that can print money. For them, the question is “does the country have the resources, and can they be mobilized?”

We can make the world and ourselves better when we want to, at least for now.

 


This is a donor-supported site, so if you value the writing, please DONATE or SUBSCRIBE.


Saturday Morning Breakfast Cereal - Sad



Click here to go see the bonus panel!

Hovertext:
Actually with this new plugin the robot doesn't need the sympathy but can just directly experience a mixture of awe and joy.


