Say cheese! Last week’s decision greenlighting Bunnings’ use of facial recognition technology to routinely track customers signals how unprepared Australia is for the coming AI storm.
On the face of it, the Administrative Review Tribunal’s decision to reject the Privacy Commissioner’s finding that Bunnings’ use of intrusive, high-impact AI was unlawful is a technical call. But the impact will be physical.
Expect retailers and others operating public-facing spaces to scale up. Capturing our biometric information and matching it against massive and often inaccurate external databases, to make real-time decisions about whether we can access those spaces, will become common.
Bunnings argues it was acting on concerns about violence in its stores, but it chose a crude technological solution that dehumanizes the customers and staff it claims to protect.
Secretly tracking customers fits a marketplace that has changed from a place of human interaction to one of automated checkouts, where customers and employees alike are kept under watch and body cams stand ready to document the inevitable reaction.
Our antiquated privacy laws, which haven’t seen serious change in 40 years, are a feature of this dystopian cycle of automation, not a bug, and are critical to the broader direction big tech is taking us in.
Significant privacy reform was a priority of former Attorney General Mark Dreyfus, and he managed to push a modest round of changes focused on children’s privacy through Parliament before falling victim to internal factional intrigue following the 2025 election landslide.
The proposed second round of changes should not be controversial: expanding the definition of “personal information” to include our digital footprint; eliminating “tick a box” consent to the collection and exploitation of personal information; giving people the right to access and erase their digital footprint; and requiring greater scrutiny of high-impact AI like facial recognition.
We know from our regular polling that the public strongly supports these changes; the challenge has always been to find a way to turn this support into a counterweight to the powerful vested interests that oppose meaningful change in the sector.
The list of those seeking exemptions from the bill is long: small businesses, the media and political parties all argue their particular case, while the growing list of businesses built on the extraction of our data will do everything they can to protect their patch.
The progress (or otherwise) of these privacy reforms will be an early test of the government’s broader approach to AI, embodied in its light-touch National AI Plan, which prioritizes bespoke guardrails and updates to existing laws over hard red lines, in the name of “productivity”.
This strategy may seem simple on paper, but with the relevant laws fragmented and spanning multiple departments, the risk is that each legislative battle will pit the cashed-up tech sector and vested industry interests against under-resourced parts of civil society.
A survey of the work needed to prepare for the coming wave includes, but is not limited to: copyright (as the tech industry resumes its bid to legalize the theft of creative output); online safety (as AI unleashes “nudify” apps and sexed-up AI companions on the market with apparent impunity); consumer protection (as ever more sophisticated scams are automated); workplace law (so that workers can understand how their labor is being used to drive the models that will replace them); and an online duty of care (to hold those deploying technology accountable for the impact of their products).
Just listing these challenges is exhausting, but what is even more worrying is that there is no sequenced, integrated approach to tackling them under a national AI plan, and no common set of principles for how they should be addressed.
That’s why the foundation of any effective societal agreement with AI must be a strong set of privacy principles that establish ground rules for the collection and trading of our personal information, whether scraped from the web or extracted from observed behavior, both online and in the real world.
If the AI revolution fulfills even a fraction of its hype, the disruptions we face as workers, consumers, and citizens will be severe. Surely it is not too much to ask the government to take ownership and give us a collective voice in whatever happens next.
Which brings us back to Bunnings and the model of routine civilian surveillance that it normalizes.
We are told that we need to accelerate the AI race to defeat the repressive Chinese model of constant surveillance and social credit, while similar identification technology is being deployed by ICE agents in the United States and the Israeli military in Gaza.
Poignantly, privacy laws were among the principles that emerged from the horror of the Holocaust, with the recognition of the dangers inherent in classifying and centralizing information about citizens, as IBM’s machines had done in service of the Nazi regime.
In fact, the first consumer privacy laws were enacted in West Germany in the 1960s, and the unified nation was later a driving force behind the EU’s GDPR, which comes closest to a balanced set of principles for dealing with the information age.
Long before the advent of social media algorithms and language models built on the illegal exploitation of our creativity, it was believed that we needed havens where we weren’t monitored and could just be ourselves.
This space has been slowly eroded, but the pace of its eradication is now accelerating: from documents that occasionally confirm an identity, to a living digital footprint traded between companies, to a unique biometric profile that will pre-empt and shape our every move.
Without strong privacy protections we are all guinea pigs in a real-time experiment run on our personal lives. If ever there were a time to draw a line in the sand, it is now. Bunnings can provide the sausages.
