
Here are the technologies changing the face of scams in Hong Kong

Scams powered by new technologies are on the rise, and as a global finance hub, Hong Kong has proven particularly vulnerable.

The city’s police force recorded 16,182 technology-related criminal cases in the first half of the year, a 3.5 per cent increase over the same period in 2023. Losses in these cases amounted to HK$2.66 billion (US$341.1 million), according to Police Chief Superintendent Raymond Lam Cheuk-ho.

But how have scammers been able to perpetrate ever more convincing fraud? Here are the top technologies fuelling the fraud boom:


Deepfakes – which use generative artificial intelligence (AI) to create videos, images or audio of a person’s likeness – are becoming increasingly hard to tell apart from real people.

With the explosion of corporate AI adoption, deepfake tools are cheaper and more accessible than ever, making it easy for criminals with little to no technical background to pull off sophisticated scams.

Deepfakes have become a global headache, with the number of reported cases skyrocketing. In the first quarter of this year, identity verification provider Sumsub detected a 245 per cent year-on-year increase in deepfake cases.

Hong Kong police have recorded three cases related to the technology since last year and discovered 21 clips using deepfakes to impersonate government officials or celebrities on the internet, security chief Chris Tang said in response to a lawmaker’s inquiry in June.

Deepfakes of celebrities have increasingly been used to fool people online. The Hong Kong Securities and Futures Commission (SFC) earlier this year warned of a scam using deepfakes of Elon Musk touting a cryptocurrency trading platform called “Quantum AI”. Photo: Screengrab

Among the three deepfake cases, one involved the loss of HK$200 million when a Hong Kong employee of multinational design and engineering firm Arup was fooled in a video conference call. Everyone else on the call, including a person who appeared to be the chief financial officer, was an impersonator. Publicly available video and audio were all the fraudsters needed to create the ruse.


Deepfakes go beyond just generating someone else’s likeness in a video. They can be used to create convincing but fraudulent documents and biometric data.

Hong Kong police cracked down on a fraud syndicate that submitted more than 20 online loan applications using deepfake technology to bypass identity checks in the application process. One of the applications, for a HK$70,000 loan, was approved.

Just as these tools are making scams harder for people to detect, the technology can also be used to fight back. Deepfake Inspector from American-Japanese cybersecurity firm Trend Micro, for example, analyses images for noise or colour discrepancies to identify deepfakes in live video calls.
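
Trend Micro has not published how Deepfake Inspector works internally, but the general idea of hunting for noise and colour inconsistencies can be sketched in a few lines of Python. The heuristic below is purely illustrative, not the vendor’s method: it estimates per-channel noise in each frame and flags frames whose colour channels disagree by more than an arbitrary threshold.

```python
import cv2
import numpy as np

def channel_noise(frame: np.ndarray) -> np.ndarray:
    """Estimate per-channel noise as the residual left after Gaussian smoothing."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    residual = frame.astype(np.float32) - blurred.astype(np.float32)
    return residual.reshape(-1, 3).std(axis=0)  # one value per B, G, R channel

def looks_synthetic(frame: np.ndarray, max_spread: float = 2.5) -> bool:
    """Hypothetical rule: camera sensors tend to produce similar noise levels
    across colour channels, so a large spread can hint at generated or heavily
    re-encoded content. The threshold here is an illustrative guess."""
    noise = channel_noise(frame)
    return float(noise.max() - noise.min()) > max_spread

cap = cv2.VideoCapture("call_recording.mp4")  # hypothetical input file
flagged = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if looks_synthetic(frame):
        flagged += 1
cap.release()
print(f"{flagged} suspicious frames")
```

A production detector would combine many such signals with trained models; a single hand-tuned threshold like this would misfire on compressed or low-light video.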

Everyone is familiar with classic examples of identity theft, which typically involve government ID numbers, credit card numbers or biometric information, often used to commit fraud. The theft of digital identities is similar in that it allows fraudsters to impersonate others within computer networks, but in some cases it can be even more insidious than traditional ID theft.

Digital identities are software and algorithms that are used as proof of a person’s or machine’s identity online. Think of persistent cookies that keep a user logged into platforms such as Google and Facebook or an application programming interface (API) key. Stealing that information can allow a malicious actor to appear as someone with authorised access.

CyberArk’s Billy Chuang (left), solution engineering director for North Asia, and Sandy Lau, district manager for Hong Kong and Macau. Photo: CyberArk

The growth of cloud services has heightened both the incentives for and risks of this type of cyber threat. If a system uses a single form of digital identity to verify whether users are who they say they are, it is even more vulnerable.

“There is a chance that cookies will be stolen or be exposed to the third party and they use the cookie to access other applications or in-house resources,” said Sandy Lau, district manager for Hong Kong and Macau at CyberArk, an Israeli information security provider.
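
Lau’s warning is easy to demonstrate. In the Python sketch below, the endpoint, cookie name and value are all invented; the point is that a server which authenticates by session cookie alone cannot tell the legitimate owner from anyone who copied the cookie.

```python
import requests

# Hypothetical values: a session cookie copied from a victim's browser
# or intercepted in transit.
STOLEN_COOKIE = {"session_id": "9f8e7d6c5b4a3210"}
INTERNAL_API = "https://intranet.example.com/api/payroll"  # invented endpoint

# No username, password or second factor is sent. If the server trusts the
# session cookie alone, this request looks identical to the victim's own.
response = requests.get(INTERNAL_API, cookies=STOLEN_COOKIE, timeout=10)
print(response.status_code)
```

Defences such as binding sessions to a device, short cookie lifetimes and re-authentication before sensitive actions all aim to shrink that window.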

Hybrid work arrangements, such as the use of personal devices for work, may also increase the risk of cyber theft, Lau added.

To address clients’ needs and the growing concerns around machine identities, CyberArk launched an identity-centric secure browser in March, which assists employees in separating work and personal applications and domains.

When Microsoft-backed start-up OpenAI launched ChatGPT in late 2022, it sparked an arms race among companies trying to one-up each other with their own large language models (LLMs) – the underlying technology – with ever larger data sets and sophisticated training methods.

Now there is a seemingly endless list of options for users looking for everything from a little help cleaning up their prose to defrauding people out of their life savings. Malicious actors are increasingly turning to LLMs to help with tasks such as generating text messages and sniffing out system vulnerabilities.

Hackers can use LLMs to generate queries that automate the process of finding weaknesses in a targeted network. Once they gain access, they can use LLMs again to further exploit vulnerabilities internally. The median time from a system’s initial compromise to the exfiltration of data was shortened to two days last year, down from the nine days it took in 2021, cybersecurity firm Palo Alto Networks concluded in a report published in March.

Phishing attacks – which include malicious links sent by email, text messages, or voice messages – remain the most common means of gaining access to a target’s system. LLMs have given a fresh makeover to an old scam, allowing more convincing messages to be sent out on a mass scale.

Fortunately, AI is also good at recognising fraudulent links when users might not be paying attention. The Hong Kong Computer Emergency Response Team Coordination Centre (HKCERT), the city’s information security watchdog, has been testing AI language models since May to help detect phishing websites and improve its risk alert system.
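
HKCERT has not published the details of its models, but even a crude, rule-based screen conveys how automated phishing detection works. The scoring rules and bait words below are illustrative assumptions, not HKCERT’s system.

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative bait words; real systems learn such signals from data.
SUSPICIOUS_WORDS = ("login", "verify", "update", "secure", "account")

def phishing_score(url: str) -> int:
    """Toy heuristic: a higher score means a more suspicious URL."""
    host = urlparse(url).hostname or ""
    score = 0
    try:
        ipaddress.ip_address(host)
        score += 3  # a raw IP address instead of a domain is a classic tell
    except ValueError:
        pass
    if host.startswith("xn--") or ".xn--" in host:
        score += 2  # punycode can hide lookalike characters
    if host.count(".") >= 4:
        score += 2  # deep subdomain nesting buries the real domain
    if "@" in url:
        score += 3  # everything before "@" is a username, hiding the host
    if any(word in url.lower() for word in SUSPICIOUS_WORDS):
        score += 1  # credential bait in the path or query
    return score

for url in ("https://www.example.com/", "http://203.0.113.7/bank/login.verify"):
    print(url, phishing_score(url))
```

An AI language model replaces these hand-written rules with learned ones, which is also why LLM-generated phishing messages are harder to catch with rules alone.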

Cyber utopians heralded the invention of bitcoin as a revolution that would change life on the internet as we know it. Cryptocurrencies may not have revolutionised money for most people, but they have opened up a whole new way of siphoning funds from unsuspecting users.

One common attack in the crypto sector targets a user’s wallet, which in many cases is made accessible through browser extensions. Scammers may create fake websites or phishing emails that look like they come from legitimate crypto services, tricking victims into revealing their private keys.

The logo of cryptocurrency platform JPEX arranged in Hong Kong on September 19, 2023. Photo: Bloomberg

These keys are an example of the single form of digital identity that cybersecurity experts have warned about. Anyone with the private key can access everything in that wallet and send crypto tokens to a new location in an irreversible transaction.
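
How absolute that control is can be shown in a few lines of Python with the `cryptography` package. This is a simplified illustration: real wallets serialise transactions very differently, but the signing principle is the same.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The wallet's private key: whoever holds these bytes controls the funds.
# secp256k1 is the curve bitcoin uses.
private_key = ec.generate_private_key(ec.SECP256K1())

# Simplified stand-in for a transaction payload.
transaction = b"send all tokens from victim_wallet to attacker_wallet"

# Signing requires nothing but the key: no password, no second factor.
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The network verifies against the public key alone. It cannot distinguish
# the owner from a thief who copied the key, and an accepted transaction
# cannot be reversed.
private_key.public_key().verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
print("signature accepted")
```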

The rise of decentralised finance, which does not rely on intermediaries such as centralised crypto exchanges, has also created new risks. Self-executing smart contracts have increased the speed and efficiency of transactions, a perk to some but a serious challenge when it comes to fraud. Scammers can exploit vulnerabilities in these contracts, which sometimes stem from technical flaws in the code but can be as simple as taking advantage of lag in transaction times to fool a target into making a new transaction.
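
Real contract exploits vary widely, but the timing flaw described above has a simple analogue in plain Python: when checking a balance and spending it are separate steps, two transactions can race through the gap. The toy ledger below is a deliberately simplified illustration, not actual smart-contract code.

```python
import threading
import time

class NaiveLedger:
    """Toy ledger with a check-then-act flaw: the balance test and the
    debit are not atomic, so concurrent withdrawals can interleave."""

    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> bool:
        if self.balance >= amount:   # check the funds exist...
            time.sleep(0.01)         # ...a lag another transaction can exploit...
            self.balance -= amount   # ...then debit them
            return True
        return False

ledger = NaiveLedger(balance=100)
withdrawals = [threading.Thread(target=ledger.withdraw, args=(100,)) for _ in range(2)]
for t in withdrawals:
    t.start()
for t in withdrawals:
    t.join()
print(ledger.balance)  # frequently -100: both withdrawals passed the check
```

Well-written contracts close the gap by updating state before any external interaction or by guarding the sequence with a lock; scams of this kind work on contracts that do not.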

Hong Kong’s efforts to position itself as a Web3 business hub since the end of 2022 have invited both praise and criticism. Concerns about the type of business that crypto attracts were exacerbated last year when a seemingly fraudulent exchange called JPEX was tied to HK$1.5 billion in lost funds.

Some used the JPEX scandal, one of the largest financial frauds in the city’s history, to criticise regulators. Others said it proved Hong Kong was on the right track with regulations that took effect last year requiring cryptocurrency exchanges to be licensed.

This article originally appeared in the South China Morning Post (SCMP), the most authoritative voice reporting on China and Asia for more than a century. For more SCMP stories, please explore the SCMP app or visit the SCMP’s Facebook and Twitter pages. Copyright © 2024 South China Morning Post Publishers Ltd. All rights reserved.
