A look into Corporate fraud in Australia, Stranglehold of Monopolies, Telecom's Oppression, Biased Law System, Corporate influence in politics, Industrial Relations disadvantaging workers, Outsourcing Australian Jobs, Offshore Banking, Petrochemical company domination, Invisibly Visible.
It's not what you see, it's what goes on behind the scenes. Australia, the warrantless colony.
COMMONWEALTH OF AUSTRALIA (ABN: 122 104 616)
Australia's Prime Minister (CEO) Tony Abbott: "Australia is Open for Business"
They lie to the people every single day, you just have to catch them out!
Evidence shows opposite of Dutton's claims on Cashless Card and family violence rates, advocates say
The Cashless Debit Card was heavily criticised when the coalition
government began trials in 2016, with advocates pointing to data
showing the policy led to increases in violence.
Advocates have hit back at Opposition leader Peter Dutton's claims
that the abolishment of the cashless debit card would lead to more
family violence in Aboriginal communities.
While visiting Adelaide, Mr Dutton spoke to reporters about the
government’s push to abolish the cashless debit card (CDC), an
initiative of the former coalition government's welfare reform agenda.
The policy quarantined a portion of a person’s welfare income, and it
could not be used to buy alcohol, gambling products, or for withdrawing
cash.
Dutton claimed the removal of the card would see rates of violence
increase in Indigenous communities, in particular, "against women and
children," and prompted the government to place its focus on the
issue.
'Takes power away'
Change the Record Co-Chair and Djirra CEO Antoinette Braybrook hit
back at Mr Dutton’s claims linking the abolition of the card to an
increase in violence, saying instead that the card “takes power away
from Aboriginal women”.
“It particularly takes power away from Aboriginal women who
need financial control and freedom to make choices that are in their
best interests and the best interests of their children. That is
self-determination,” she said.
The Kuku Yalanji woman has been on the frontline of family violence
prevention for two decades. On Monday, she gave evidence to the CDC
inquiry.
She told NITV News that Mr Dutton was incorrect in his claims.
“Contrary to what Mr Dutton has claimed, there is substantial
evidence that domestic violence police call-outs increased significantly
after the introduction of the Cashless Debit Card - not decreased,” she
said.
“Independent studies across four cashless welfare sites found
an increase in tensions and fighting within households due to the
additional financial pressure that the Cashless Debit Card placed on
families, and increased risk to women fleeing family violence as a
result of having less available money at their disposal.”
Ms Braybrook said the solution to family violence was not government
control of money or choices but rather the empowerment of First
Nations women.
"The solution is to provide culturally safe, trauma-informed,
wrap-around services to support women to find safety and take control
over their own lives and decisions," she said.
"Aboriginal and Torres Strait Islander women are ready to take
control of their own lives no matter where they live, and we just need
to make sure that the appropriate services are there for our women to do
just that.”
Security researchers have discovered over 80,000 Hikvision cameras
vulnerable to a critical command injection flaw that's easily
exploitable via specially crafted messages sent to the vulnerable web
server.
The flaw is tracked as CVE-2021-36260 and was addressed by Hikvision via a firmware update in September 2021.
However, according to a whitepaper published by CYFIRMA, tens of
thousands of systems used by 2,300 organizations across 100 countries
have still not applied the security update.
There have been two known public exploits for CVE-2021-36260, one published in October 2021 and the second in February 2022, so threat actors of all skill levels can search for and exploit vulnerable cameras.
In December 2021, a Mirai-based botnet called 'Moobot' used this particular exploit to spread aggressively and enlist systems into DDoS (distributed denial of service) swarms.
In January 2022, CISA added CVE-2021-36260 to its catalog of actively
exploited bugs, warning organizations that attackers could
"take control" of devices and urging them to patch the flaw immediately.
Vulnerable and exploited
CYFIRMA says Russian-speaking hacking forums often sell network
entry points relying on exploitable Hikvision cameras, which can be
used either for "botnetting" or lateral movement.
Of an analyzed sample of 285,000 internet-facing Hikvision web
servers, the cybersecurity firm found roughly 80,000 still vulnerable to
exploitation.
Most of these are located in China and the United States, while
Vietnam, the UK, Ukraine, Thailand, South Africa, France, the
Netherlands, and Romania all count above 2,000 vulnerable endpoints.
While the exploitation of the flaw doesn't follow a specific pattern
right now, since multiple threat actors are involved in this endeavor,
CYFIRMA underlines the cases of the Chinese hacking groups APT41 and
APT10, as well as Russian threat groups specializing in cyberespionage.
An example they give is a cyberespionage campaign named "think
pocket," which has been targeting a popular connectivity product used in
an array of industries across the globe since August 2021.
"From an External Threat Landscape Management (ETLM) analogy,
cybercriminals from countries that may not have a cordial relation with
other nations could use the vulnerable Hikvision camera products to
launch a geopolitically motivated cyber warfare," explains CYFIRMA in the whitepaper.
Weak passwords also a problem
Apart from the command injection vulnerability, there's also the
issue of weak passwords, set by users for convenience or shipped as
device defaults and never changed during initial setup.
Bleeping Computer has spotted multiple offerings of lists, some even
free, containing credentials for Hikvision camera live video feeds on
clearnet hacking forums.
If you operate a Hikvision camera, you should make it a priority to install the latest available firmware update, use a strong password, and isolate the IoT network from critical assets using a firewall or VLAN.
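As a quick triage step, the firmware build date a camera reports can tell you whether it predates the September 2021 fix. The sketch below is a hypothetical example only: the sample XML and the `firmwareReleasedDate` "build YYMMDD" format are assumptions modelled on commonly reported Hikvision ISAPI device-info output, so verify both against your own device before relying on it.

```python
# Hedged sketch: decide whether a camera's firmware predates the
# September 2021 patch for CVE-2021-36260, given the XML its
# management interface reports. The sample XML and field names are
# assumptions modelled on commonly reported ISAPI output.
import xml.etree.ElementTree as ET
from datetime import date

FIX_DATE = date(2021, 9, 1)  # patched firmware shipped September 2021

SAMPLE = """<DeviceInfo>
  <model>DS-2CD2042WD-I</model>
  <firmwareVersion>V5.4.5</firmwareVersion>
  <firmwareReleasedDate>build 210416</firmwareReleasedDate>
</DeviceInfo>"""

def firmware_build_date(xml_text: str) -> date:
    """Parse a 'build YYMMDD' stamp out of the device-info XML."""
    root = ET.fromstring(xml_text)
    stamp = root.findtext("firmwareReleasedDate", "").split()[-1]
    return date(2000 + int(stamp[:2]), int(stamp[2:4]), int(stamp[4:6]))

def needs_update(xml_text: str) -> bool:
    """True if the build predates the CVE-2021-36260 fix."""
    return firmware_build_date(xml_text) < FIX_DATE

print(needs_update(SAMPLE))  # April 2021 build predates the fix: True
```

Even a passing check here is no substitute for the firewall and VLAN isolation above, since unpatched default credentials remain a separate risk.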
The trend of our gadgets and infrastructure constantly, often
invasively, monitoring their users shows little sign of slowing — not
when there's so much money to be made.
Of course it hasn't been all bad for humanity, what with AI's help in
advancing medical, communications and logistics tech in recent years. In
his new book, Machines Behaving Badly: The Morality of AI,
Scientia Professor of Artificial Intelligence at the University of New
South Wales, Dr. Toby Walsh, explores the duality of potential that
artificial intelligence/machine learning systems offer and, in the
excerpt below, how to claw back a bit of your privacy from an industry
built for omniscience.
The Second Law of Thermodynamics states that the total entropy of a
system – the amount of disorder – only ever increases. In other words,
the amount of order only ever decreases. Privacy is similar to entropy.
Privacy is only ever decreasing. Privacy is not something you can take
back. I cannot take back from you the knowledge that I sing Abba songs
badly in the shower. Just as you can’t take back from me the fact that I
found out how you vote.
There are different forms of privacy. There’s our digital online
privacy, all the information about our lives in cyberspace. You might
think our digital privacy is already lost. We have given too much of it
to companies like Facebook and Google. Then there’s our analogue offline
privacy, all the information about our lives in the physical world. Is
there hope that we’ll keep hold of our analogue privacy?
The problem is that we are connecting ourselves, our homes and our
workplaces to lots of internet-enabled devices: smartwatches, smart
light bulbs, toasters, fridges, weighing scales, running machines,
doorbells and front door locks. And all these devices are
interconnected, carefully recording everything we do. Our location. Our
heartbeat. Our blood pressure. Our weight. The smile or frown on our
face. Our food intake. Our visits to the toilet. Our workouts.
These devices will monitor us 24/7, and companies like Google and
Amazon will collate all this information. Why do you think Google bought
both Nest and Fitbit recently? And why do you think Amazon acquired two
smart home companies, Ring and Blink Home, and built their own
smartwatch? They’re in an arms race to know us better.
The benefits to the companies are obvious. The more they know about
us, the more they can target us with adverts and products. There’s one
of Amazon’s famous ‘flywheels’ in this. Many of the products they will
sell us will collect more data on us. And that data will help target us
to make more purchases.
The benefits to us are also obvious. All this health data can help
make us live healthier. And our longer lives will be easier, as lights
switch on when we enter a room, and thermostats move automatically to
our preferred temperature. The better these companies know us, the
better their recommendations will be. They’ll recommend only movies we
want to watch, songs we want to listen to and products we want to buy.
But there are also many potential pitfalls. What if your health
insurance premiums increase every time you miss a gym class? Or your
fridge orders too much comfort food? Or your employer sacks you because
your smartwatch reveals you took too many toilet breaks?
With our digital selves, we can pretend to be someone that we are
not. We can lie about our preferences. We can connect anonymously with
VPNs and fake email accounts. But it is much harder to lie about your
analogue self. We have little control over how fast our heart beats or
how widely the pupils of our eyes dilate.
We’ve already seen political parties manipulate how we vote based on
our digital footprint. What more could they do if they really understood
how we respond physically to their messages? Imagine a political party
that could access everyone’s heartbeat and blood pressure. Even George
Orwell didn’t go that far.
Worse still, we are giving this analogue data to private companies
that are not very good at sharing their profits with us. When you send
your saliva off to 23AndMe for genetic testing, you are giving them
access to the core of who you are, your DNA. If 23AndMe happens to use
your DNA to develop a cure for a rare genetic disease that you possess,
you will probably have to pay for that cure. The 23AndMe terms and
conditions make this very clear:
You understand that by providing any sample, having your
Genetic Information processed, accessing your Genetic Information, or
providing Self-Reported Information, you acquire no rights in any
research or commercial products that may be developed by 23andMe or its
collaborating partners. You specifically understand that you will not
receive compensation for any research or commercial products that
include or result from your Genetic Information or Self-Reported
Information.
A Private Future
How, then, might we put safeguards in place to preserve our privacy
in an AI-enabled world? I have a couple of simple fixes. Some are
regulatory and could be implemented today. Others are technological and are
something for the future, when we have AI that is smarter and more
capable of defending our privacy.
The technology companies all have long terms of service and privacy
policies. If you have lots of spare time, you can read them. Researchers
at Carnegie Mellon University calculated that the average internet user
would have to spend 76 work days each year just to read all the things
that they have agreed to online. But what then? If you don’t like what
you read, what choices do you have?
All you can do today, it seems, is log off and not use their service.
You can’t demand greater privacy than the technology companies are
willing to provide. If you don’t like Gmail reading your emails, you
can’t use Gmail. Worse than that, you’d better not email anyone with a
Gmail account, as Google will read any emails that go through the Gmail
system.
So here’s a simple alternative. All digital services must provide four changeable levels of privacy.
Level 1: They keep no information about you beyond your username, email and password.
Level 2: They keep information on you to provide you with a better service, but they do not share this information with anyone.
Level 3: They keep information on you that they may share with sister companies.
Level 4: They consider the information that they collect on you as public.
And you can change the level of privacy with one click from the
settings page. And any changes are retrospective, so if you select Level
1 privacy, the company must delete all information they currently have
on you, beyond your username, email and password. In addition, there’s a
requirement that all data beyond Level 1 privacy is deleted after three
years unless you opt in explicitly for it to be kept. Think of this as a
digital right to be forgotten.
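The four-level scheme above can be caricatured in a few lines of code. Everything here is hypothetical — the class and field names are invented for illustration — but it makes the key behaviour concrete: dropping to Level 1 is retrospective, deleting everything beyond the account basics.

```python
# Hypothetical sketch of the four changeable privacy levels described
# above. Dropping to Level 1 retroactively deletes all stored data
# beyond username, email and password.
from enum import IntEnum

class PrivacyLevel(IntEnum):
    MINIMAL = 1  # username, email and password only
    PRIVATE = 2  # kept to improve the service, never shared
    SHARED = 3   # may be shared with sister companies
    PUBLIC = 4   # treated as public

ACCOUNT_BASICS = {"username", "email", "password"}

class UserRecord:
    def __init__(self, data: dict):
        self.level = PrivacyLevel.PUBLIC
        self.data = dict(data)

    def set_level(self, level: PrivacyLevel) -> None:
        """The one-click change; retrospective at Level 1."""
        self.level = level
        if level == PrivacyLevel.MINIMAL:
            self.data = {k: v for k, v in self.data.items()
                         if k in ACCOUNT_BASICS}

user = UserRecord({"username": "ab", "email": "a@b.net",
                   "password": "hash", "heartbeat_log": [72, 75]})
user.set_level(PrivacyLevel.MINIMAL)
print(sorted(user.data))  # ['email', 'password', 'username']
```

The three-year expiry could be added the same way, as a timestamp check that demotes any non-basic data the user has not explicitly opted to keep.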
I grew up in the 1970s and 1980s. My many youthful transgressions
have, thankfully, been lost in the mists of time. They will not haunt me
when I apply for a new job or run for political office. I fear,
however, for young people today, whose every post on social media is
archived and waiting to be printed off by some prospective employer or
political opponent. This is one reason why we need a digital right to be
forgotten.
More friction may help. Ironically, the internet was invented to
remove frictions – in particular, to make it easier to share data and
communicate more quickly and effortlessly. I’m starting to think,
however, that this lack of friction is the cause of many problems. Our
physical highways have speed and other restrictions. Perhaps the
internet highway needs a few more limitations too?
One such problem is described in a famous cartoon: ‘On the internet,
no one knows you’re a dog.’ If we introduced instead a friction by
insisting on identity checks, then certain issues around anonymity and
trust might go away. Similarly, resharing restrictions on social media
might help prevent the distribution of fake news. And profanity filters
might help prevent posting content that inflames.
On the other side, other parts of the internet might benefit from
fewer frictions. Why is it that Facebook can get away with behaving
badly with our data? One of the problems here is there’s no real
alternative. If you’ve had enough of Facebook’s bad behaviour and log
off – as I did some years back – then it is you who will suffer most.
You can’t take all your data, your social network, your posts, your
photos to some rival social media service. There is no real competition.
Facebook is a walled garden, holding onto your data and setting the
rules. We need to open that data up and thereby permit true competition.
That leaves me with a technological fix. At some point in the future,
all our devices will contain AI agents that help connect us and can
also protect our privacy. AI will move from the centre to the edge, away
from the cloud and onto our devices. These AI agents will monitor the
data entering and leaving our devices. They will do their best to ensure
that data about us that we don’t want shared isn’t.
We are perhaps at the technological low point today. To do anything
interesting, we need to send data up into the cloud, to tap into the
vast computational resources that can be found there. Siri, for
instance, doesn’t run on your iPhone but on Apple’s vast servers. And
once your data leaves your possession, you might as well consider it
public. But we can look forward to a future where AI is small enough and
smart enough to run on your device itself, and your data never has to
be sent anywhere.
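The edge-agent idea can be sketched very simply: before any payload leaves the device, an on-device filter strips out whatever the user has marked as never-share. The field names below are hypothetical, and a real agent would be far more sophisticated than a fixed blocklist.

```python
# Toy sketch of an on-device "privacy agent": outbound payloads are
# filtered against a user-controlled never-share list before upload.
# All field names are hypothetical.
NEVER_SHARE = {"location", "heart_rate", "contacts"}

def outbound_filter(payload: dict) -> dict:
    """Return a copy of the payload with never-share fields removed."""
    return {k: v for k, v in payload.items() if k not in NEVER_SHARE}

reading = {"device_id": "watch-01", "heart_rate": 68, "steps": 4210}
print(outbound_filter(reading))  # {'device_id': 'watch-01', 'steps': 4210}
```

The point of running this on the device, rather than in the cloud, is that the sensitive fields never leave your possession at all.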
This is the sort of AI-enabled future where technology and regulation
will not simply help preserve our privacy, but even enhance it.
Technical fixes can only take us so far. It is abundantly clear that we
also need more regulation. For far too long the tech industry has been
given too many freedoms. Monopolies are starting to form. Bad behaviours
are becoming the norm. Many internet businesses are poorly aligned with
the public good.
Digital regulation is probably best implemented at the level of
nation-states or close-knit trading blocs. In the current climate of
nationalism, bodies such as the United Nations and the World Trade
Organization are unlikely to reach useful consensus. The common values
shared by members of such large transnational bodies are too weak to
offer much protection to the consumer.
The European Union has led the way in regulating the tech sector. The
General Data Protection Regulation (GDPR), and the upcoming Digital
Service Act (DSA) and Digital Market Act (DMA) are good examples of
Europe’s leadership in this space. A few nation-states have also started
to pick up their game. The United Kingdom introduced a Google tax in
2015 to try to make tech companies pay a fair share of tax. And shortly
after the terrible shootings in Christchurch, New Zealand, in 2019, the
Australian government introduced legislation to fine companies up to 10
per cent of their annual revenue if they fail to take down abhorrent
violent material quickly enough. Unsurprisingly, fining tech companies a
significant fraction of their global annual revenue appears to get
their attention.
It is easy to dismiss laws in Australia as somewhat irrelevant to
multinational companies like Google. If they’re too irritating, they can
just pull out of the Australian market. Google’s accountants will
hardly notice the blip in their worldwide revenue. But national laws
often set precedents that get applied elsewhere. Australia followed up
with its own Google tax just six months after the United Kingdom.
California introduced its own version of the GDPR, the California
Consumer Privacy Act (CCPA), just a month after the regulation came into
effect in Europe. Such knock-on effects are probably the real reason
that Google has argued so vocally against Australia’s new Media
Bargaining Code. They greatly fear the precedent it will set.