
Cybersecurity predictions for 2024
Navigating the evolving threat landscape

Written by Michal Pechoucek, Chief Technology Officer
Published December 7, 2023 | Read time: 8 minutes

    As we stand on the brink of 2024, the nature of cyber threats is undergoing a profound transformation: we expect the 2024 threat landscape to be filled with frequent, highly individualized attacks. Advances in artificial intelligence (AI) will enable criminals to build sophisticated tools that craft targeted messages in victims' own languages, making their manipulation far more convincing.

    Next year, we anticipate ransomware and scams that are designed to manipulate individuals emotionally. As we navigate this changing landscape, our predictions for the next year offer insights into the challenges ahead, as well as the measures we can adopt to fortify our digital defenses.

    Future advancements in AI and related risks

    The coming year will be a pivotal moment in the evolution of artificial intelligence, marking a period of significant transformation and emerging challenges. Rapid advancements in AI are changing how these tools integrate into our lives. As AI becomes more embedded in our daily routines, its impact extends beyond mere technological innovation, influencing societal norms, privacy considerations and ethical boundaries.

    AI will undergo multiple evolutions

    We expect a significant evolution in AI in 2024, especially in large language models (LLMs). Historically, LLMs have been cloud-based, relying on extensive server resources to produce text resembling human writing. The upcoming year, however, marks a pivotal shift towards more compact LLMs that run directly on user devices. This change is more than a simple relocation; it signifies a profound transformation in how AI integrates into our everyday activities and workflows.

    Several key factors drive the move towards device-based LLMs. First, privacy demands are rising, and data stored on a device is more private than data stored in the cloud; processing data locally also enhances security by reducing exposure to cloud storage risks. Second, this shift promises greater speed and efficiency: local processing eliminates the latency often encountered with cloud-based solutions, leading to a more seamless and responsive user experience.
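
    To make the idea concrete, here is a minimal sketch of on-device text generation with a compact, open LLM using the Hugging Face transformers library; the model name, generation parameters and hardware assumptions are illustrative only, not a specific product's implementation.

        # Minimal sketch: running a compact LLM entirely on the local device.
        # The model name below is illustrative; any small open model that fits
        # on the device (laptop, phone, edge box) could be substituted.
        from transformers import pipeline

        generator = pipeline(
            "text-generation",
            model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # compact, locally runnable model
            device_map="auto",  # use a local GPU/NPU if present, otherwise CPU
        )

        prompt = "Explain in one sentence why on-device inference improves privacy:"
        result = generator(prompt, max_new_tokens=60, do_sample=False)

        # Once the model is downloaded, the prompt and the generated text never
        # leave the device: there is no call to a cloud API.
        print(result[0]["generated_text"])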

    Additionally, 2024 will be significant for generative AI, particularly in conversions between different types of media. The evolving LLMs are not limited to text generation; they are branching into more dynamic forms of media conversion.

    The text-to-video feature, allowing video to be synthesized from text, is a notable advancement. This capability will open up new vistas for content creators, educators and marketers, offering a tool to rapidly produce visually engaging material that resonates with their audience. However, it will also be misused to create and spread scams and disinformation, as it becomes progressively harder to distinguish a genuinely recorded video from an AI-generated one.

    The development of text-to-voice AI is equally transformative. This technology goes beyond traditional text-to-speech systems, offering more nuanced and human-like voice generation. It holds immense potential, from creating more interactive and personalized customer service experiences to aiding those with visual impairments or reading difficulties.

    Evolving AI technologies raise questions about ethics, regulation and balancing innovation with user welfare. For businesses and individuals alike, the upcoming year promises to be a journey of discovery and adaptation, as these lightweight, multi-faceted generative AI solutions redefine our interaction with technology and information in profound ways.

    New tools bring new security challenges as generative AI is broadly adopted

    The increasing popularity of generative AI in business will bring new risks and challenges. One significant concern is "Bring Your Own AI" (BYOAI), the practice of employees using personal AI tools in the workplace, which we predict will grow dramatically in popularity.

    This practice poses a considerable risk of unintentional leakage of sensitive company secrets. Employees using personal AI for work may accidentally expose confidential data to third parties. On the flip side, corporate AI solutions will offer an increasing number of privacy-preserving features, which are frequently not available at the personal level.

    Social media will become a major attack vector for AI-related scams and disinformation

    We predict that the vast troves of personal information collected by social media companies over the past decade and a half will be utilized to target AI-generated ads and scams in the form of videos and static posts. Cybercriminals will use these platforms' reach to spread sophisticated scams and disinformation. Leveraging AI, they will be able to create and distribute highly customized and convincing content that blends seamlessly into users' feeds, targeting individuals based on their online behaviors, preferences and social networks.

    Scams may appear as fake news, deceptive ads, deepfakes or misleading direct messages. The interactive nature of social media will allow this content to spread quickly and broadly, making these cybercrimes more efficient and harder for users and platforms to counter.

    Business Email Compromise (BEC) attacks will utilize AI to create more sophisticated Business Communication Compromise (BCC) attacks

    In 2024, we will witness a significant evolution in Business Communication Compromise (BCC) attacks (formerly referred to as Business Email Compromise or BEC attacks), as cybercriminals increasingly adopt AI and deepfake technologies to execute more sophisticated and convincing scams.

    Cybercriminals will create deepfakes mimicking executives or partners, making it harder for employees to distinguish legitimate requests from fraudulent ones, particularly when quick decisions are needed.

    These enhanced BEC/BCC attacks will lead to financial losses and erode trust within organizations. Companies could encounter reduced effectiveness in communication and internal mistrust, as employees grow increasingly wary and doubtful of digital interactions.

    In response to these threats, we expect organizations to adopt a solution resembling two-factor authentication for sensitive requests, mandating verification through a separate, independent channel, such as a person-to-person interaction or a secured phone call.
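
    As a rough illustration of what such an out-of-band check could look like, the sketch below holds a hypothetical high-risk payment request until it is confirmed over a second, independent channel; the channel names, threshold and confirmation step are assumptions made for the example, not a prescribed implementation.

        # Sketch: out-of-band verification for high-risk requests.
        # Channel names and the amount threshold are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class PaymentRequest:
            requester: str
            amount: float
            origin_channel: str  # e.g. "email", "chat", "video_call"

        def confirm_via_independent_channel(request: PaymentRequest) -> bool:
            # Stand-in for a callback on a verified phone number or an in-person
            # confirmation; it must not reuse the channel the request arrived on.
            print(f"Call {request.requester} on a verified number to confirm {request.amount:.2f}")
            return input("Confirmed out-of-band? (y/n): ").strip().lower() == "y"

        def process(request: PaymentRequest) -> None:
            # Hold any large request, or one arriving over an easily spoofed
            # channel, until it is confirmed through an independent channel.
            easily_spoofed = {"email", "chat", "video_call"}
            if request.amount > 10_000 or request.origin_channel in easily_spoofed:
                if not confirm_via_independent_channel(request):
                    print("Request rejected: out-of-band confirmation failed.")
                    return
            print("Request approved.")

        process(PaymentRequest("CFO", 250_000.0, "video_call"))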

    The dark side of ChatGPT's fame: Malware on the rise

    The increasing popularity of AI tools like ChatGPT has attracted the attention of cybercriminals. We expect increased attempts by attackers to exploit AI solution-seekers. This includes deceptive “GPT” apps or plugins used for data theft or malware distribution. Users might think these malicious tools are legitimate AI solutions, downloading them only to compromise their systems and data. 

    We also anticipate attempts by malicious entities to "hack" LLMs with the aim of accessing valuable information, such as training data, model configurations, internal algorithms or other sensitive internal details. Furthermore, threat actors might backdoor public LLMs, potentially stealing user inputs, intellectual property and personally identifiable information (PII).

    Finally, we foresee the development of new malicious LLMs like "WormGPT." In contrast to commercial models—which include built-in safeguards—these malicious models are designed to support the generation of malicious content.

    Digital blackmail will evolve and become more targeted

    Digital blackmail is rapidly evolving and becoming more targeted. This change is not limited to ransomware attacks; it encompasses a variety of tactics aimed at high-value targets. Notably, sophisticated data exfiltration shows the shifting nature and severity of these threats. As we move forward, this trend signifies a move towards more intricate and damaging forms of digital extortion.

    Ransomware will become more complex and damaging

    Cybercriminals mainly use encrypted or stolen data to demand ransoms or to sell it, but we foresee a rise in more harmful data-abuse tactics. These may involve data brokers exploiting stolen information for identity theft, targeting both employees and customers, or using it to steal a company's assets. This shift points to a more complex and harmful ransomware impact on businesses.

    Evolving attack methods: exploiting VPN and cloud infrastructure

    Expect evolving ransomware delivery methods, including more sophisticated VPN infrastructure exploitation. This tactic presents a formidable challenge for organizations relying on VPNs for remote work and secure communications. 

    Recent security incidents are troubling for companies that believe being in the cloud resolves all security concerns. Many of them have recently learned the hard lesson that attacks such as cloud authentication token theft are real and impactful. We should expect a significant increase in cloud infrastructure attacks, leading to more extortion.

    Diversification of extortion methods beyond encryption

    In addition to the above threats, we predict a rise in extortion emails like sextortion and business threats. These emails, typically disseminated through botnets, use intense scare tactics but are often repetitive. In 2024, expect a surge in creative email extortion. This could include the generation of falsified images or the introduction of new subjects for extortion, further complicating the cybersecurity landscape.

    Threat delivery will become more sophisticated on mobile

    Mobile cyberthreats are rapidly evolving in sophistication. We anticipate an increase in unethical practices such as instant loan apps resulting in data theft and extortion, the rise of trojanized chat apps filled with spyware and crypto wallet thieves, and a shift in malware distribution methods, favoring web-based threats through social media, messaging, or email, rather than attacks through apps.

    Instant loans as a lure into blackmail and extortion

    As financial services continue to move online, there is growing concern about the rise of malicious practices in the instant loan app industry. Some lenders behind these apps use unethical tactics to force repayments: a growing number of instant loan apps steal data from delinquent borrowers, and rogue apps misuse that data to pressure repayment while adding exorbitant late fees. This practice is increasingly common and is expected to intensify in 2024.

    Trojanized chat apps with spyware and stealing modules

    There could be a rise in fake chat apps with hidden crypto-stealing features or spyware. These fraudulent apps may abuse the trust users have in communication platforms, penetrating devices to extract confidential information or cryptocurrency keys. These attacks could include advanced social engineering methods, persuading users to give extensive permissions under the pretense of adding chat features that are not present in standard chat clients.

    Shifts in the delivery techniques of mobile threats

    Distributing web-based threats like phishing is much easier than delivering them through native applications (and often offers a higher return on investment). This is a key reason for the steep rise in such attacks on mobile devices, where they account for more than 80% of all attacks. We predict that this trend will persist, with more malicious actors favoring web-delivery attack methods over native mobile applications.

    Rising threats in the cryptocurrency sphere

    The evolving cryptocurrency landscape introduces significant cybersecurity risks. The decentralized nature of these platforms offers anonymity but lacks the traditional safeguards of financial institutions. This absence of oversight and the inability to reverse transactions or monitor for fraudulent activities amplify the risks associated with these digital assets. 

    An increased focus on crypto wallets by cybercriminals

    Cybercriminals are quickly adapting to the growing interest in cryptocurrencies, and we're observing a marked increase in attacks targeting crypto wallets with ever more refined and sophisticated methods. This trend is likely to intensify, posing a significant threat to both individual and institutional investors in the cryptocurrency space.

    Malware as a service will continue to evolve

    Services like Lumma — a Malware-as-a-Service (MaaS) product targeting cryptocurrency wallets — are evolving with new features and better anti-detection capabilities. These MaaS companies act similarly to corporations, with malware advancements funded by profits from their malicious activities.

    A new technique is the renewal of stolen cookies. This method enables attackers to retain access to hacked online accounts even when passwords are updated. The consequences for online account security are significant, and we anticipate this will profoundly affect the coming year, requiring innovative strategies in digital security and account management.
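
    One defensive pattern, sketched below under assumed storage and field names rather than any specific product's design, is to bind each session to a server-side token version and bump that version whenever the password changes or compromise is suspected, so that previously issued cookies stop working even if an attacker has kept them "fresh".

        # Sketch: invalidating stolen session cookies by versioning sessions
        # server-side. Storage layout and field names are illustrative.
        import secrets

        users = {"alice": {"token_version": 1}}
        sessions = {}  # session_id -> {"user": ..., "token_version": ...}

        def create_session(username: str) -> str:
            sid = secrets.token_urlsafe(32)
            sessions[sid] = {"user": username,
                             "token_version": users[username]["token_version"]}
            return sid

        def is_session_valid(sid: str) -> bool:
            s = sessions.get(sid)
            # A cookie is only honored if it matches the user's current version.
            return s is not None and s["token_version"] == users[s["user"]]["token_version"]

        def on_password_change(username: str) -> None:
            # Bumping the version invalidates every previously issued cookie at
            # once, including copies an attacker may have stolen and renewed.
            users[username]["token_version"] += 1

        cookie = create_session("alice")
        print(is_session_valid(cookie))   # True
        on_password_change("alice")
        print(is_session_valid(cookie))   # False: the old cookie no longer works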

    Vulnerabilities in crypto exchanges and cross-currency transactions

    Crypto exchanges are facing more attacks, leading to significant financial losses. A particular risk lies in the vulnerability of the protocols that allow currency exchange. These exchange protocols can be manipulated to issue funds or alter balances in ways that are difficult for both users and experts to detect. Given the potential for significant financial damage and the complex nature of these attacks, there is an urgent need for stronger security measures and greater vigilance in the cryptocurrency exchange sector.

    Using readily available formal verification systems and SAT solvers, attackers analyze the source code of crypto systems for vulnerabilities and exploit them to steal cryptocurrency. As a result, we anticipate more high-volume fraud next year.
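
    The same tooling can be turned to defense. As a minimal, hypothetical sketch, an SMT solver such as Z3 can be asked whether any inputs make a simplified balance-update rule overflow, which is exactly the kind of property attackers probe for; the rule shown is a toy example, not taken from any real exchange.

        # Sketch: using an SMT solver (Z3) to look for an exploitable integer
        # overflow in a simplified, hypothetical balance-update rule.
        from z3 import BitVec, Solver, UGT, sat

        balance = BitVec("balance", 64)   # unsigned 64-bit account balance
        deposit = BitVec("deposit", 64)   # attacker-controlled deposit amount

        new_balance = balance + deposit   # wraps modulo 2**64 on overflow

        s = Solver()
        # Ask: can the "new" balance end up smaller than the old one?
        # If so, the addition wrapped and balances can be manipulated.
        s.add(UGT(balance, new_balance))

        if s.check() == sat:
            m = s.model()
            print("Overflow reachable, e.g. balance =", m[balance],
                  "deposit =", m[deposit])
        else:
            print("No overflowing inputs exist for this rule.")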

    Conclusion

    The cybersecurity predictions for 2024 underscore a landscape in flux, dominated by the dual forces of AI's promise and peril. While AI tools can be leveraged for protection, their misuse by cybercriminals presents a significant challenge. 

    As we look to the future, it’s clear that a proactive and educated stance on cybersecurity is not just advisable — it is imperative. Our strategies must evolve in tandem with the threats we face, ensuring that we remain one step ahead in the ever-escalating cyber arms race.

    Michal Pechoucek
    Chief Technology Officer
    Michal leads the core technology, innovation and R&D teams driving security engines at Gen, as well as the technology vision for human-centered digital safety and beyond.