Business Daily Media


AI development works better for everyone when its workforce is well looked after

  • Written by Peter Bloom, Professor of Management, University of Essex

A former CEO and executive chairman of Google recently suggested that the tech giant’s apparent lag in AI development was due to the company prioritising employees’ personal wellbeing over progress. Eric Schmidt told an audience[1]: “Google decided that work-life balance and going home early and working from home was more important than winning.”

Schmidt later retracted[2] his statement, claiming he “misspoke”. Yet his comment reflects a common view in the tech industry – that progress is dependent on intensive work patterns and keeping a close eye on staff.

Companies such as Amazon have implemented controversial[3] worker tracking systems. Others promote a culture of “overworking”[4] as a necessary part of innovation.

But this mindset overlooks the crucial role that an engaged and happy workforce plays in creating beneficial technology. Studies have shown[5], for example, that remote working and better work-life balance often lead to increased productivity rather than hindering progress.

History also shows that empowering workers and fostering a democratic approach have accelerated technological breakthroughs. The open-source[6] movement, where information is shared in software development, is a case in point. Wikipedia is another example – a success story built entirely on volunteer contributions and collective effort.

In AI too, there has been rapid progress through projects that emphasise[7] openness and collaboration, such as BLOOM[8] and GPT-J[9], language models similar to ChatGPT. This demonstrates that democratising access to AI tools and knowledge can accelerate progress.

Meanwhile, many of the ethical challenges in AI development – from algorithmic bias to privacy concerns – stem from rushed development cycles and a lack of diverse perspectives.

For instance, racial and gender biases[10] in facial recognition systems reportedly emerged because development teams were working under pressure to deliver results quickly. The Cambridge Analytica scandal[11], which exposed the misuse of Facebook user data, illustrated the risks of prioritising growth and profit over privacy and social impact.

The drive for relentless productivity and market dominance has also led to the emergence of “digital sweatshops” – exploitative labour regimes[12] associated with AI development.

These include content moderation[13] “factories” where workers are exposed to traumatic material for long hours with minimal support (a spokesperson for Facebook’s parent company said[14] it takes its responsibility to content reviewers seriously, with “industry-leading pay, benefits and support”). They also include the data processing operations behind machine learning, where workers in low-wage countries[15] perform repetitive tasks for little reward.

Companies such as Facebook, Google and Amazon have been criticised for outsourcing[16] these crucial (but often overlooked) aspects of AI development to contractors with poor working conditions. These practices highlight the human cost of rapid AI advancement, where the real motivation is often corporate dominance[17] and maximising shareholder value.

This model also leads to innovations that fail to address broader social and ecological challenges. The substantial carbon footprint[18] associated with AI development underlines the urgent need for more considered, sustainable methods.

Socially beneficial tech? Who is Danny/Shutterstock[19]

But socially beneficial innovations are more likely to emerge from well-treated teams who are granted the autonomy to explore and address[20] the wider implications of their work. They will not come from rigid hierarchies focused solely on immediate financial returns.

Herein lies the false binary between worker power and technological advancement. The evidence suggests[21] that when executives exert too much control, the development of socially beneficial technology is hindered. Top-down management simply cannot provide what empowered workers and open collaboration bring to the table.

Worker-led initiatives have also been at the forefront of ethical technology development. For example, Google employees’ protest against the company’s involvement with Project Maven[22], a US military AI scheme, was a success. And Amazon workers have continued to push[23] for the company to improve its environmental credentials.

Schmidt spoke of “winning” in the AI race. But what exactly is being won through techniques that prioritise corporate control and worker exploitation? Often the result is unethical technology developed under exploitative conditions – technology which serves narrow corporate interests rather than social needs.

But the future of AI and other emerging technologies should not be driven solely by market forces. Innovation does not require oppressive work conditions or excessive corporate control.

And technological progress and social progress are not mutually exclusive. In fact, they can be mutually strengthening. A truly successful AI industry should be one that produces innovative technologies in a way which empowers workers, deals with ethical considerations, and makes a positive contribution to society.

References

  1. ^ told an audience (fortune.com)
  2. ^ retracted (www.hrgrapevine.com)
  3. ^ controversial (www.peoplemanagement.co.uk)
  4. ^ “overworking” (www.itpro.com)
  5. ^ have shown (hbr.org)
  6. ^ open-source (www.forbes.com)
  7. ^ emphasise (huggingface.co)
  8. ^ BLOOM (www.technologyreview.com)
  9. ^ GPT-J (www.eleuther.ai)
  10. ^ racial and gender biases (news.mit.edu)
  11. ^ Cambridge Analytica scandal (www.nytimes.com)
  12. ^ exploitative labour regimes (www.bloomsbury.com)
  13. ^ content moderation (time.com)
  14. ^ said (time.com)
  15. ^ low-wage countries (www.ft.com)
  16. ^ outsourcing (www.theguardian.com)
  17. ^ corporate dominance (projects.itforchange.net)
  18. ^ carbon footprint (www.technologyreview.com)
  19. ^ Who is Danny/Shutterstock (www.shutterstock.com)
  20. ^ explore and address (hbr.org)
  21. ^ suggests (www.forbes.com)
  22. ^ Project Maven (www.nytimes.com)
  23. ^ push (www.bbc.com)

Read more https://theconversation.com/ai-development-works-better-for-everyone-when-its-workforce-is-well-looked-after-240079
