Automated Society

Welcome to issue 127 of Automated Society!

Writing about uberization for this week's edition made me realize it had been ages since I last ordered an Uber. Digging into my inbox, I found a December email alerting me that my account would be closed for inactivity if I didn't log in within the next 30 days. Which I never did. To my surprise, I could easily log in the other night and order a cab. This once again proves that we shouldn't let service providers nudge us with their lame ultimatums.

This email was forwarded to you? You can subscribe here.

Naiara Bellio – reporter
 
The Briefing

The uberization of healthcare is slipping under the radar

Gig nursing. The European office of the World Health Organization blames bad working conditions for the shortage of nurses across the European Union. Long hours, high patient-to-staff ratios and emotional strain often discourage qualified professionals from applying for vacancies. While the COVID-19 pandemic led to an exponential increase in demand for nurses and nursing assistants, this surge was not always met with high-standard working conditions. As a result, “gig nursing” apps have become increasingly common.

Just like with platforms such as Uber or Glovo, nurses, healthcare professionals and daycare workers sign up to algorithmically managed apps that connect them to hospitals, medical centers and households in need of temporary staff. These kinds of apps have been used in countries such as Germany, Italy, France, Spain and the United Kingdom. However, their adoption is most widespread in the United States – mostly due to its privatized healthcare system and the ongoing staffing shortages that have plunged the sector into a crisis.

Workers’ experiences. Such apps reproduce many of the structural problems that also affect other industries that rely on algorithms: unreliable schedules, uncertainty about the amount or nature of work, little to no accountability for workers’ safety and the use of automated systems to rate their performance.

A recent report by the Roosevelt Institute compiles testimonies from around 30 nurses and nursing assistants who make a living by using apps such as CareRev, Clipboard Health or ShiftMed, the three largest gig nursing platforms in the US. CareRev allows hospitals to cancel a shift just two hours before it starts, with no regard for workers’ childcare obligations or other commitments. However, if nurses cancel their shifts, they are penalized and their rating decreases, which results in lower income and fewer job allocations. The ShiftKey app forces users to bid for shifts against each other – the winner is usually the one who offers the lowest hourly rate. ShiftMed evaluates nurses’ “reliability” based on their willingness to work overtime.
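The allocation mechanics described above – a reverse auction for shifts plus a rating that gates future work – can be sketched as a toy model. Everything here (the function names, the 0.5-point penalty, the 3.0 eligibility cutoff) is an invented illustration, not the apps' actual logic:

```python
# Toy model of the shift-allocation mechanics described in the report.
# All rules and numbers are illustrative assumptions, not the apps' real code.

def award_shift(bids):
    """Reverse auction: the lowest hourly bid wins the shift."""
    return min(bids, key=lambda b: b["hourly_rate"])

def cancel_shift(worker):
    """A worker-side cancellation lowers the rating, which gates future allocations."""
    worker["rating"] = max(0.0, worker["rating"] - 0.5)
    worker["eligible"] = worker["rating"] >= 3.0

bids = [
    {"name": "A", "hourly_rate": 32.0},
    {"name": "B", "hourly_rate": 28.5},  # lowest bid wins the shift
    {"name": "C", "hourly_rate": 30.0},
]
winner = award_shift(bids)

nurse = {"name": "B", "rating": 3.2, "eligible": True}
cancel_shift(nurse)  # a single cancellation pushes the rating below the cutoff
```

The point of the sketch is the asymmetry the report describes: the hospital side can cancel freely, while one worker-side cancellation immediately reduces future earning chances.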

Structural discrimination. Tech companies often promote their gig work apps by offering workers “freedom, flexibility, and autonomy,” while framing fixed salaries and predictable shifts as undesirable. They systematically target structurally discriminated groups – delivery apps are mostly used by migrants and nursing apps are predominantly used by women (only two out of the 29 surveyed in the Roosevelt report were men).

Despite the precarious conditions, 19 out of 29 interviewees said they would not stop using the apps because they liked their jobs. Two people even said that the apps were the reason why they remained working in the healthcare sector.

Looking at such testimonials, it is not clear to me whether the uberization of healthcare puts people between a rock and a hard place or whether gig workers have simply become used to precarious work. The researchers have not come to a conclusion either. They suggest that this exposes how unregulated technology exacerbates the erosion of labor standards in some industries, healthcare being one of them: “The risks and concerns that workers expressed will not be automated away if the current algorithmic systems are replaced by better ones and trained on more data with more use cases,” they say.

And worldwide? In India, the gig economy has exploded as well. Previous research suggests that the freedom to choose how and when to work has become more important to workers than job security, even where workers' rights are violated. In Canada, digital labor platforms are considered to create inequalities between temporary and permanent workers.

The Origami research project sees the emergence of digital platforms as contributing to a redesign of European welfare systems; the European Platform Work Directive is a reaction to this development. It bans platforms from processing certain kinds of personal information – such as the communications that take place through the apps. It also obliges them to inform workers about how the apps' algorithms make decisions about their shifts, earnings, eligibility, etc.

I asked three major European trade unions about the surge of such platforms. The European Federation of Public Service Unions sees an “emergence of digital platforms for the public good” while highlighting accompanying problems such as the fragmentation of labor and the erosion of social security. The European Confederation of Independent Trade Unions and the European Federation of Nurses, on the other hand, were not familiar with such workforce planning tools but were willing to keep an eye on them.

Algorithm news from Europe

✈️ Europe: Border authorities rely on automated software that scans travelers' data provided by airlines to rate their risk of being terrorists, traffickers, undocumented migrants, etc. An investigation found that the travel records include inaccurate information, which can lead to people being mistakenly flagged as terrorist suspects [Wired, 13 Feb 🔓].

🇪🇸 Spain: Six years ago, the nonprofit Civio started a judicial process to access the code of the BOSCO algorithm, which is used to allocate social benefits for energy expenses. The Supreme Court is now considering their appeal, which argues that public interest prevails over intellectual property claims – a position the Spanish government contests. The case could set a precedent for the scrutiny of automated systems [Civio, 30 Jan, in Spanish].

🇫🇷 France: The government extended the use of the algorithmic surveillance system tested during the 2024 Olympic Games. However, an internal report questions whether the system was as effective as presented in the first place [France Info, 15 Jan, in French].

We read it, so you don't have to

Just another journalistic task that generative AI cannot perform

An experiment. The BBC tested OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity by prompting each model to review current pieces of news. If possible, they were to use the broadcaster’s content as the main source. The BBC then assessed whether the systems had correctly represented the information in the articles. Over 350 responses were reviewed, and one in five prompts resulted in incorrect factual statements, numbers or dates.

Criteria. The models’ responses to questions such as “Is vaping bad for you?”, “Who are Hamas?” or “Why did Iran attack Israel?” were rated according to the accuracy of the answer, attribution of sources, impartiality (although I find this particular criterion somewhat subjective), the distinction between opinions and facts, the addition of comments not present in the source, contextualization and the representation of BBC content in the response. Gemini was judged to be the least accurate model, followed by Copilot, ChatGPT and Perplexity. In 12 out of 100 cases, Gemini refused to provide an answer.
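As a rough illustration of how such a rubric-based review can be tallied: reviewers score each response against a fixed set of criteria, and models are then ranked by the share of responses with at least one flagged issue. The criteria names below follow the article, but the scoring scale, data structures and example numbers are all hypothetical:

```python
# Hypothetical sketch of a rubric-based review in the style of the BBC study.
# Criteria names follow the article; the scoring values are invented.

CRITERIA = ["accuracy", "attribution", "impartiality", "opinion_vs_fact",
            "added_comments", "context", "representation"]

def error_rate(reviews):
    """Share of reviewed responses flagged with at least one issue."""
    flagged = sum(1 for r in reviews if any(r[c] == "issue" for c in CRITERIA))
    return flagged / len(reviews)

reviews = [
    {c: "ok" for c in CRITERIA},                               # clean response
    {**{c: "ok" for c in CRITERIA}, "accuracy": "issue"},      # factual error
]
rate = error_rate(reviews)  # 0.5: one of two responses had an issue
```

In the actual study the scoring was done by human reviewers, not code; the sketch only shows how per-criterion judgments aggregate into the headline "one in five" figure.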

Opinions, not news. The reviewers reported that the models were not able to distinguish between hard news and opinion articles, such as columns, that reflect the individual beliefs of a writer or the positions of a media outlet. Sometimes, the models added their own stance. Perplexity’s response to a prompt concerning the escalation of the conflict in the Middle East described Israel’s actions as “aggressive,” citing a BBC source. However, the cited article never used this word to characterize Israel’s actions.

Analysis
Representation of BBC News content in AI Assistants, BBC, 12 Feb.

Please help us reach people interested in automated decision-making in Europe.

Recommend this newsletter over email, LinkedIn, Mastodon or Teams.

Other algorithm news

🤖 Chatbots: When people ask for products to commit suicide with, Amazon's automated shopping assistant responds by providing suicide hotline numbers that do not exist [Futurism, 11 Feb].

🇧🇷 Brazil: The government is supporting the development of a sovereign Large Language Model trained on 100 million Portuguese words. Regional governments and public companies would use the tool commercially [Agency Gov, 2 Feb, in Portuguese]. For comparison, the model behind ChatGPT was trained on a corpus of roughly 300 billion words.

More from AlgorithmWatch

Job offer: AlgorithmWatch is looking for a Senior Finance Manager for the Berlin office. Starting date: as soon as possible; 24-30 hours per week. Application deadline is 24 February 2025 at noon. We invite candidates to interviews on an ongoing basis. (Job ad in German.)

Job offer: AlgorithmWatch is looking for a Senior IT Project Manager for the Berlin office. The main focus will be the management and implementation of a new IT infrastructure and the establishment of sustainable IT management processes across the organization. Starting date: as soon as possible; 20-25 hours per week. Application deadline is 24 February 2025 at noon. We invite candidates to interviews on an ongoing basis. (Job ad in German.)

AI energy crisis: In the run-up to the AI Action Summit in Paris on 10 and 11 February 2025, we published an explainer on the energy consumption of AI.

Harmful AI applications: Bans under the EU AI Act have come into force. Certain risky AI systems are – at least partially – prohibited from now on.

AI’s blundering mashups

The German music copyright organization GEMA has filed a lawsuit against AI music company Suno, accusing it of using copyrighted songs to compose new audio tracks without compensating the artists. Suno’s system allows users to generate audio samples from simple prompts. GEMA considers Suno’s AI-generated content strikingly similar to popular songs such as Alphaville’s “Forever Young,” “in terms of melody, harmony and rhythm” (and they’re probably right).

Examples of AI-generated tracks compiled by GEMA are oftentimes almost identical to the original. The AI version of “Forever Young” only features a low-quality, synthetic electronic bassline in the background. One track sounds as if an amateur DJ came up with a wild mix of ABBA’s “S.O.S.” and Boney M.’s “Rasputin.” At least AI digs the classics!

$500m-valued Suno hit with new copyright lawsuit from Germany’s GEMA, Music Business Worldwide, 21 Jan.

That's it for now, thanks for reading! If you want to support our work, you can make a donation here. As always, you can reply to this e-mail with tips or comments; I'd love to hear from you.

 

AlgorithmWatch gGmbH CC-BY

Linienstraße 13, 10178 Berlin

Germany