Perspective as a service from a raging generalist
78 stories · 0 followers

Elections 2024: America at the End of Its Tether

Jakel1828 · 1 day ago:
A big source of election anxiety and conflict is that both parties give only lip service to rising inequality and fraying social safety nets.

Please just stop saying "just" (2019)


The Human Toll of ALPR Errors


This post was written by Gowri Nayar, an EFF legal intern.

Imagine driving to get your nails done with your family and all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gunpoint. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.

Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March 2009. She had just dropped her sister off at a BART train station when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her at gunpoint to exit her vehicle and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

Turns out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.
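To make the failure modes concrete, here is a minimal sketch (in Python, with made-up plate numbers; this is illustrative, not any real ALPR vendor's code or data) of how an exact-match hot-list lookup turns an out-of-state twin plate or a one-character misread into a confident "hit," and how even a trivial state check would have flagged the errors described above:

```python
# "Hot list" of wanted vehicles: plate text -> (issuing state, vehicle type).
# Plate numbers here are hypothetical.
HOT_LIST = {
    "ABC1234": ("MT", "motorcycle"),   # the stolen vehicle
}

def plate_only_hit(ocr_text):
    # Exact match on plate text alone -- the behavior the stops above
    # suggest. An identical plate issued by another state, or an OCR
    # misread ('3' reported as '7') that happens to collide with a
    # wanted plate, both produce an equally confident "hit."
    return ocr_text in HOT_LIST

def checked_hit(ocr_text, observed_state):
    # One cheap safeguard: require the issuing state to agree, and even
    # then treat the result as a tip to verify against the physical
    # plate, not as proof.
    wanted = HOT_LIST.get(ocr_text)
    return wanted is not None and wanted[0] == observed_state

# Gilliam's stop: a Colorado SUV whose plate text matched a Montana
# motorcycle's record.
print(plate_only_hit("ABC1234"))      # True  -> triggers an armed stop
print(checked_hit("ABC1234", "CO"))   # False -> state mismatch flags the error
```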

In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, not least because too many police officers react recklessly to information provided by these readers.

Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gunpoint to lie on his stomach to be handcuffed, only to later realize that their license plate reader had mistaken an ‘H’ for an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on their license plate. One study found that ALPRs misread the state on 1-in-10 plates (not counting other reading errors).

Other wrongful stops result from police negligence in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving Day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to remove the recovered car from the “hot list” of stolen vehicles.

Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house, handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used in the shooting had a missing fog light.

Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

Since 2012, EFF has been resisting the safety, privacy, and other threats posed by ALPR technology through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.



Jakel1828 · 4 days ago:
Stories like this show why the 'human-in-the-loop' concept is a fallacy.

And every time I read a case like this, I regret having written about the human in the loop in terms of augmented intelligence for a former employer's blog.

The truth is that humans are rarely in the loop, either because they're lazy or because workplace bureaucracy tells them to follow orders rather than think for themselves.

Wearing the costume


There’s a huge difference between carrying a stethoscope and being a doctor.

And being a clown requires far more than getting a clown suit.

Entrepreneurs with business cards, slick websites and mission statements are confused. That’s not the hard part.

If the costume puts you in the right frame of mind, that’s great. But the hard part is the important part.

Can you list the parts that matter? (hint: they might be the parts you’re avoiding.)


Geoffrey Hinton’s misguided views on AI


It will probably come as no surprise to you that I’m no big fan of the so-called “godfather of AI” Geoffrey Hinton, and it’s fair to say I was stunned when he was given a Nobel Prize in Physics — as he seems to have been as well. Not long after that announcement was made, I was asked to write a quick piece about it for the Toronto Star, and they’ve allowed me to share it with you.

I think the perspective on AI that Hinton shares — which is often charitably termed an “AI safety” perspective (or, less charitably, he’s a doomer) — is very unhelpful in actually dealing with the realities and potential near futures of AI — the harms to workers and the wider society that have nothing to do with the sci-fi dream of superintelligence. But I do want to say something positive about him.

Hinton joined the University of Toronto in 1987 and is a Canadian citizen. He’s seen a lot of what’s happened in Canada over the past several decades. Earlier this week, he revealed that he donated half of his share of the $1.45 million CAD in prize money from the Nobel Committee to Water First, an organization in Ontario training Indigenous peoples to develop safe water systems.

In recent years, Canada has been facing a reckoning for the cultural genocide it inflicted on Indigenous peoples within its borders, from the lack of clean drinking water to the horrors of the residential schools. At a news conference, Hinton said, “I think it’s great that they’re recognizing (who lived on the land first), but it doesn’t stop Indigenous kids getting diarrhea.” He may be misguided on AI, but good on him for that.

Now, here’s my piece on Hinton’s Nobel Prize, first published by the Toronto Star.


In the mid-1960s, MIT computer scientist Joseph Weizenbaum developed a program called ELIZA, a rudimentary forerunner of chatbots like ChatGPT, designed to simulate a psychotherapist. Upon seeing how people engaged with it, however, Weizenbaum’s optimism toward the technology soured.

The program had no understanding of what users were inputting. Even so, Weizenbaum found that people wanted to believe it did. His secretary even asked him to leave the room as she responded to the system’s questions. Today, researchers call this the ELIZA effect: projecting human traits onto computer programs and overestimating their capabilities as a result.
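To see how thin the trick was, here is a toy exchange in the spirit of ELIZA (a hypothetical sketch, not Weizenbaum's actual DOCTOR script): the program matches a keyword pattern and reflects the user's own words back with the pronouns swapped, and nothing resembling understanding is involved at any step.

```python
import re

# Word-level pronoun swaps used to "reflect" the user's phrasing.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase):
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(utterance):
    # Two canned patterns stand in for ELIZA's keyword script; anything
    # else falls through to a stock prompt. No meaning is modeled anywhere.
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i need (.*)", utterance, re.IGNORECASE)
    if m:
        return f"What would it mean to you to get {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I feel ignored by my family"))
# -> Why do you feel ignored by your family?
print(respond("I need my mother to listen to me"))
# -> What would it mean to you to get your mother to listen to you?
```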

That phenomenon came to mind recently when I heard the news that Geoffrey Hinton was being honoured with the 2024 Nobel Prize in Physics alongside John Hopfield. While Hinton certainly helped move his field forward, his assertions about the risks of artificial intelligence could distract us from the real consequences.

You’ve likely heard Hinton referred to as the “godfather of AI.” His work has been key to the development of neural networks and the algorithms that form the basis of chatbots like ChatGPT. Hinton is a professor emeritus at the University of Toronto and split his time between the university and Google until he resigned from the company in May 2023.

There’s no doubting that Hinton has made important contributions to his field. But since the rise of generative AI at the end of 2022, Hinton has become known in tech circles for another reason: he promotes the idea that AI systems are nearing human levels of intelligence, and that they therefore pose a threat to human survival. He is not alone.

But there are also a large number of researchers who push back on that idea and charge he’s guilty of falling prey to the ELIZA effect.

Hinton asserts that since artificial neural networks were modelled on biological brains, they must then work similarly to them. That means a tool like ChatGPT isn’t just using complex algorithms to churn out believable results; it’s actually developed a level of understanding that will continue to grow until it exceeds the intelligence of human beings. He says this would mark an “existential threat” to humanity – despite acknowledging to the BBC that as recently as a few years ago most experts “thought it was just science fiction.”

But that’s still the case today.

After theories like Hinton’s started gaining more traction as the hype around ChatGPT grew, science fiction author Ted Chiang started criticizing the excitement, calling the technology “autocomplete on steroids.” Emily M. Bender, a computational linguist at the University of Washington, has similarly called out people like Hinton for conflating a chatbot’s ability to churn out text with the notion there’s any meaning behind it.

Put more plainly: things like ChatGPT only appear to be intelligent because they’ve been designed to mimic human language to a plausible enough degree. Their creators want to believe they’re creating intelligent machines, so that’s what they choose to see.

When I spoke to Bender last year, she told me that people like Hinton “would rather think about this imaginary sci-fi villain that they can be fighting against, rather than looking at their own role in what’s going on in harms right now.” AI models present plenty of concerns beyond the supposedly existential and science fictional ones Hinton is most preoccupied with, including everything from their environmental costs to how they’re already being deployed against marginalized populations today. But when CNN asked Hinton about those concerns in May 2023, he said they “weren’t as existentially serious” and thus not as worthy of his time.

For his contributions to his field, Hinton deserves recognition, and he’s received plenty of it. But just because he’s excelled at advancing AI models doesn’t mean we also need to turn to him for answers to the questions about their broader societal consequences. Hinton may be an intelligent man, but we shouldn’t assume the same about the technology he helped create.

Jakel1828 · 6 days ago:
This is another example of hype masking legitimate concerns.

Vanishing Culture: A Report on Our Fragile Cultural Record



In today’s digital landscape, corporate interests, shifting distribution models, and malicious cyber attacks are threatening public access to our shared cultural history.

  • The rise of streaming platforms and temporary licensing agreements means that sound recordings, books, films, and other cultural artifacts that used to be owned in physical form are now at risk—in digital form—of disappearing from public view without ever being archived.
  • Cyber attacks, like those against the Internet Archive, British Library, Seattle Public Library, Toronto Public Library and Calgary Public Library, are a new threat to digital culture, disrupting the infrastructure that secures our digital heritage and impeding access to information at community scale.

When digital materials are vulnerable to sudden removal—whether by design or by attack—our collective memory is compromised, and the public’s ability to access its own history is at risk. 

Vanishing Culture: A Report on Our Fragile Cultural Record (download) aims to raise awareness of these growing issues. The report details recent instances of cultural loss, highlights the underlying causes, and emphasizes the critical role that public-serving libraries and archives must play in preserving these materials for future generations. By empowering libraries and archives legally, culturally, and financially, we can safeguard the public’s ability to maintain access to our cultural history and our digital future.


Jakel1828 · 7 days ago:
When the internet was first taking off, we thought everything would be permanent: it would all be stored on servers, where storage is cheap and indexing is easy.

We're now seeing that the digital world is temporary after all.