AI Gives Threat Actors New Tools for Attacks, Cybersecurity Firm Says

The widespread adoption of artificial intelligence (AI) and machine learning technologies in recent years has provided "threat actors with sophisticated new tools to perpetrate attacks," cybersecurity firm Kaspersky said in a press release on Saturday.

The security firm said one such technique is the deepfake, which generates human-like speech or photo and video likenesses of real individuals. Kaspersky advised organizations and consumers to be aware that deepfakes are likely to become a bigger problem in the future.
The term deepfake, a blend of "deep learning" and "fake", refers to "fake images, video, and sound" synthesized using artificial intelligence.

The security firm said it had found deepfake creation tools and services available on darknet marketplaces that could be used for fraud, identity theft, and the theft of personal data.

"According to the estimates by Kaspersky experts, prices per one minute of a deepfake video can be purchased for as little as $300," according to a press release.

According to the press release, a recent Kaspersky poll found that 51% of employees in the Middle East, Turkey, and Africa believed they could tell the difference between a deepfake and a real image.

However, in a test, just 25% of participants could tell the difference between a genuine and an AI-generated picture. 

"This puts organizations at risk given how employees are often the primary targets of phishing and other social engineering attacks," the company said in a statement.

"Despite the fact that the technology for creating high-quality deepfakes is not yet widely available, one of the most likely use cases that will result from this is to generate voices in real-time to impersonate someone," the news release cited Hafeez Rehman, technical group manager at Kaspersky, as saying.

Rehman noted that deepfakes pose a threat not only to businesses but also to individual users. "They spread misinformation, are used for scams, or to impersonate someone without consent," he added, emphasizing that they are a growing cyber threat that must be addressed.

The World Economic Forum's Global Risks Report 2024, issued in January, cautioned that AI-fueled disinformation was a shared concern for both India and Pakistan.

Deepfakes have already been used in Pakistan to pursue political goals, notably ahead of the general elections.
Former Prime Minister Imran Khan, currently detained in Adiala Jail, addressed an online election rally in December through an AI-generated image and voice clone; the broadcast drew more than 1.4 million views on YouTube and was watched live by tens of thousands.

While Pakistan has drafted an AI law, digital rights advocates have criticized its lack of safeguards against misinformation and of protections for vulnerable communities.