Tag Archives: Artificial intelligence

With Funds Opposed by GOP, IRS to Target Ultrawealthy Tax Delinquents

“This news stands in stark contrast to the approach taken by House Republicans, who want to allow wealthy tax cheats to continue business as usual,” said U.S. Senate Finance Committee Chair Ron Wyden.

By Jessica Corbett. Published 9-8-2023 by Common Dreams

Internal Revenue Service Commissioner Daniel Werfel. Screenshot: C-SPAN

The U.S. Internal Revenue Service on Friday won praise from congressional Democrats and progressive groups for announcing “a sweeping, historic effort to restore fairness in tax compliance by shifting more attention onto high-income earners, partnerships, large corporations, and promoters abusing the nation’s tax laws.”

The IRS effort is enabled by some of the $80 billion in funding for the agency included in the Inflation Reduction Act (IRA), which President Joe Biden signed into law last year. About a quarter of that money is set to be clawed back as part of his recent deal with congressional Republicans to temporarily suspend the nation’s debt limit.


How big tech and AI are putting trafficking survivors at risk

The tech industry’s privileging of ‘safety over privacy’ could get the most vulnerable killed

By Sabra Boyd. Published 6-14-2023 by openDemocracy

Ring spotlight camera. Photo: Trusted Reviews/CC

High above the homeless camps of Seattle, in September 2022, Amazon hosted the first Tech Against Trafficking Summit. It was an elite affair. Project managers and executives from Amazon, Google, Facebook (Meta), Instagram, and Microsoft were present, as were ministers of labour from around the globe. Panellists included government leaders, law enforcement, tech executives, and NGO directors. Only two trafficking survivors made the speakers’ list.

The summit was above all a show of force. Most of the tools presented were built for law enforcement, and safety over privacy appeared to be the mantra. Only the two survivors highlighted the dangers of haphazardly collecting any and all data, a view that was generally scoffed at. Stopping traffickers by any means necessary, the non-survivors said, was more important.


Experts Urge Action to Mitigate ‘Risk of Extinction From AI’

It “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” says a new statement signed by dozens of artificial intelligence critics and boosters.

By Kenny Stancil. Published 5-30-2023 by Common Dreams

A growing number of experts are calling for a pause on advanced AI development and deployment. Graphic: deepak pal/flickr/CC

On Tuesday, 80 artificial intelligence scientists and more than 200 “other notable figures” signed a statement that says “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The one-sentence warning from the diverse group of scientists, engineers, corporate executives, academics, and other concerned individuals doesn’t go into detail about the existential threats posed by AI. Instead, it seeks to “open up discussion” and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously,” according to the Center for AI Safety, a U.S.-based nonprofit whose website hosts the statement.


Bipartisan US Bill Aims to Prevent AI From Launching Nuclear Weapons

“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” said co-sponsor Sen. Ed Markey.

By Brett Wilkins. Published 4-26-2023 by Common Dreams

U.S. Air Force Staff Sgt. Jacob Puente of the 912th Aircraft Maintenance Squadron secures an AGM-183A air-launched rapid-response hypersonic air-to-ground missile to a B-52 Stratofortress bomber at Edwards Air Force Base in Kern County, California on August 6, 2020. (Photo: Giancarlo Casem/USAF)

In the name of “protecting future generations from potentially devastating consequences,” a bipartisan group of U.S. lawmakers on Wednesday introduced legislation meant to prevent artificial intelligence from launching nuclear weapons without meaningful human control.

The Block Nuclear Launch by Autonomous Artificial Intelligence Act—introduced by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.)—asserts that “any decision to launch a nuclear weapon should not be made” by AI.


Experts Demand ‘Pause’ on Spread of Artificial Intelligence Until Regulations Imposed

“Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated,” Public Citizen warns. “History offers no reason to believe that corporations can self-regulate away the known risks.”

By Kenny Stancil. Published 4-18-2023 by Common Dreams

Image: Mike MacKenzie/flickr/CC

“Until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause.”

So says a report on the dangers of artificial intelligence (AI) published Tuesday by Public Citizen. Titled Sorry in Advance! Rapid Rush to Deploy Generative AI Risks a Wide Array of Automated Harms, the analysis by researchers Rick Claypool and Cheyenne Hunt aims to “reframe the conversation around generative AI to ensure that the public and policymakers have a say in how these new technologies might upend our lives.”


UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research

Humanitarian groups have been calling for a ban on autonomous weapons.
Wolfgang Kumm/picture alliance via Getty Images

James Dawes, Macalester College

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.


UNESCO Members Adopt First Global AI Ethics Agreement ‘To Benefit Humanity’

“We’re at a critical juncture in history,” said Ethics in Tech founder Vahid Razavi. “We need as humans to come together and decide what is the best course of action to take with these technologies before they surpass us in their abilities.”

By Brett Wilkins. Published 11-26-2021 by Common Dreams

UNESCO Director-General Audrey Azoulay speaks during a November 25, 2021 press conference announcing the adoption of a global artificial intelligence framework agreement by all 193 member states. (Photo: Christelle Alix/UNESCO/Flickr/cc)

Tech ethicists on Friday applauded after all 193 member states of the United Nations Educational, Scientific, and Cultural Organization adopted the first global framework agreement on the ethics of artificial intelligence, which acknowledges that “AI technologies can be of great service to humanity” and that “all countries can benefit from them,” while warning that “they also raise fundamental ethical concerns.”

“AI is pervasive, and enables many of our daily routines—booking flights, steering driverless cars, and personalizing our morning news feeds,” UNESCO said in a statement Thursday. “AI also supports the decision-making of governments and the private sector. AI technologies are delivering remarkable results in highly specialized fields such as cancer screening and building inclusive environments for people with disabilities. They also help combat global problems like climate change and world hunger, and help reduce poverty by optimizing economic aid.”


School surveillance of students via laptops may do more harm than good

School laptop surveillance systems monitor students even when they’re not in school.
Jacques Julien/Getty Images

Nir Kshetri, University of North Carolina – Greensboro

Ever since the start of the pandemic, more and more public school students have been using laptops, tablets or similar devices issued by their schools.

The percentage of teachers who reported their schools had provided their students with such devices doubled from 43% before the pandemic to 86% during the pandemic, a September 2021 report shows.

In one sense, it might be tempting to celebrate how schools are doing more to keep their students digitally connected during the pandemic. The problem is, schools are not just providing kids with computers to keep up with their schoolwork. Instead – in a trend that could easily be described as Orwellian – the vast majority of schools are also using those devices to keep tabs on what students are doing in their personal lives.


An autonomous robot may have already killed people – here’s how the weapons could be more destabilizing than nukes

The term ‘killer robot’ often conjures images of Terminator-like humanoid robots. Militaries around the world are working on autonomous machines that are less scary looking but no less lethal.
John F. Williams/U.S. Navy

James Dawes, Macalester College

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.


Israel Defense Ministry Launches COVID-19 Voice Test for Americans

A company formed at the behest of the Israeli Ministry of Defense has begun collecting voice data from Americans to detect COVID-19 symptoms through technology with dubious diagnostic value, but highly profitable applications in law enforcement.

By Raul Diego. Published 7-2-2020 by MintPress News

The Israeli Ministry of Defense has launched a project to analyze people’s voices and breathing patterns using artificial intelligence (AI) in order to determine if they have COVID-19. The software allegedly listens for detectable “signs of distress,” ostensibly from the respiratory effects of the virus. A May 27 report in the Jerusalem Post stated that the research was already being conducted at several hospitals in Israel, where confirmed COVID-19 patients were asked to provide voice samples to be compared to those of a control group from the general population.

Results from the research were expected sometime in June. However, the study has now been expanded beyond Israel’s borders. Over one million voice recordings are currently being collected in the United States through a mobile app developed by Massachusetts-based Vocalis Health, under the auspices of the Israeli government.
