By Chester Cabalza and Amadeus Quiaoit

    The rise of cyber threats poses unprecedented challenges to national security, demanding a robust and coordinated response. 

    Just last year, the Philippines faced a wave of cyberattacks targeting government websites and private sector entities, with massive data breaches undermining national security and exposing the personal data of millions of its citizens. Meanwhile, Artificial Intelligence (AI) is slowly emerging in the Philippines.

    On 30 July 2022, Republic Act No. 11927, also known as “An Act to Enhance the Philippine Digital Workforce Competitiveness, Establishing for the Purpose of an Inter-Agency Council for Development and Competitiveness of the Philippine Digital Workforce and Other Purposes,” was approved. Under the statute’s Implementing Rules and Regulations, relevant government agencies and private sector stakeholders shall promulgate the necessary measures for the implementation of the law.

    The law was welcomed by the Philippines’ Commission on Human Rights, which recognized it as a tool to enhance and expand the country’s socio-economic development through digital jobs and entrepreneurship. The Commission also noted that the measure helps ensure the safety of everyone in the digital space and makes the most of the Philippine government’s growing attention to digitalization.

    Last year, House Bill No. 7396, also known as “An Act Promoting the Development and Regulation of AI in the Philippines,” was proposed in response to the rapid transformation of the global economy. It highlighted that AI could potentially transform and enhance services and trigger economic growth. The proposed legislation recognizes the growing importance of AI development. The bill seeks to establish the Artificial Intelligence Development Authority (AIDA), which would serve as the oversight office facilitating the research, development, and deployment of AI technology. It would also conduct assessments to ensure that the technology complies with applicable guidelines and regulations and respects citizens’ rights and welfare.

    The creation of AIDA would certainly be helpful, especially since one of the government’s thrusts is digitalization. However, it is also important for the government to pay attention to how content in cyberspace would be managed without harming or compromising the privacy of citizens. The institution of laws and policies, as well as the creation of a regulatory body, is needed to monitor and address the use and development of AI. In this way, the digital economy, particularly content creators and users of cyberspace, will be safeguarded.

    ChatGPT has been considered one of the fastest-growing artificial intelligence applications; as of January 2023, it had 100 million monthly active users. However, the application is not well received in some countries around the world. The utilization and integration of AI tools such as ChatGPT in the Philippine economy lags behind developed countries, since the pace of adaptation to the Fourth Industrial Revolution remains slow. Although the pool of ICT and AI talent in the Philippines is immense, opportunities for gifted white-hat hackers and AI developers remain scarce.

    Still, human intelligence appears to remain far more valuable than AI where work and employment are concerned. It is the objective and obligation of any institution to develop its human resources. In an ideal setting, it should strive to hone their skills, intellect, and critical thinking, and not rely on the supposed competency of AI-operated software. After all, the dominance of human intellect is still evident in the fact that software such as ChatGPT was developed by humans in the first place.

    Moreover, work ethics still seem to be of utmost significance. This further suggests that Filipinos, whether in government or in any other sector or institution, value humanity and its attributes, such as virtue, intellect, and capability, over any artificial means of making life easier, especially when such constructed technology has questionable features that tread on ethics.

    The penetration of ChatGPT into the Philippine technocracy appears to be already underway. ChatGPT in particular has been used to produce speeches for bureaucrats. Government officials themselves were reportedly astonished, yet impressed, by the alleged excellence of the wording. Developers of ChatGPT and similar software have paved a way for improving literacy as humankind writes its history and moves toward its future. Enhancements that address some of the disadvantages of AI software may be applied in the future, further blurring the line between what people consider ethical or otherwise, and what is accepted and what is not.

    In fact, some journalists and media companies are utilizing AI tools such as ChatGPT to produce more news and stories and maximize publication, a practice that could lead to disinformation and fake news. In journalism, however, ChatGPT provides only summaries or background material; it cannot provide the insights, perspectives, and stories of the people.

    Because the majority of Filipinos do not have access to the internet, owing to costs that are high compared to other ASEAN countries, internet connectivity for AI remains elusive. There was also a lack of integration of the possible emergence of AI into the past administration’s development plan, which affects the current structure of the AI economy in the country.

    However, AI alone poses a national security issue, as its machine learning algorithms are stored in unknown cloud storage or in physical data centers. These are prone to sabotage, and the free flow of information is an open invitation to hackers and trolls. The meteoric rise of AI as a platform of technological development has brought a host of new weapons to disrupt, penetrate, and disintegrate strategic communications and critical digital infrastructure. These network exploits can unleash profound consequences across interdependent and interconnected spaces, affecting national security.

    Italy temporarily banned the application over the security risks it could pose to children younger than 13 years old. Countries such as China, Iran, and Russia, as well as Hong Kong and parts of Africa, have also made the application unavailable. The Italian ban, lifted in April 2023, sparked conversation among lawmakers. Despite the technological innovation it brings in this day and age, governments around the world are considering the application carefully.

    Much like the virus behind the Covid-19 pandemic, AI-powered malware can infiltrate a local network and destroy essential cybersecurity safeguards. Classified information can be leaked, real-time communications can be tampered with, and every aspect of information can be stolen, fabricated, or manipulated to our adversaries’ advantage. In the worst-case scenario, an AI-driven virus can serve as a weapon of mass destruction, wiping out entire networks, Wi-Fi connections, and national telecommunications grids.

    This imperative calls for better strategic adaptation on the part of the Armed Forces. The battlefield of the future may not be defined solely by guns and tanks, but also by lines of code and the silent wars waged within the digital realm. AI-powered computer worms, unlike their traditional counterparts, possess advanced capabilities that pose a significant threat to military operations at both the tactical and operational levels.

    Their stealthy nature and ability to adapt in real time render them nearly invisible to conventional antivirus defenses, allowing them to infiltrate crucial military networks and systems. With pinpoint precision, these viruses can exploit vulnerabilities identified through vast data analysis, targeting high-value assets such as command and control systems or sensitive information. This targeted approach can cripple entire operations, leaving militaries at a critical disadvantage.

    Beyond precision, AI brings automation to the forefront. Viruses can independently search for and exploit vulnerabilities across a wide range of systems, impacting everything from communication networks to logistics and intelligence databases. This operational-level disruption can hinder strategic decision-making, sow confusion, and ultimately compromise the success of entire campaigns. Data manipulation becomes a weapon of choice for AI. By altering or disrupting information, misinformation can be spread, leading to flawed decisions and chaos within military ranks. Denial-of-service attacks orchestrated by these viruses can further cripple communication channels, hindering coordination and leaving units vulnerable.

    Social engineering takes a sinister turn with the assistance of AI. By analyzing human behavior and communication patterns, viruses can craft highly convincing phishing attacks, potentially granting unauthorized access to critical systems. This vulnerability is further amplified by the virus’s ability to adapt in real time, continuously evading countermeasures and extending its lifespan within the infected network.

    The integration of AI into autonomous systems, such as drones and unmanned vehicles, adds another layer of complexity to the battlefield. Compromising these systems can not only disrupt operations but also endanger lives and jeopardize mission objectives. The rapid propagation of such viruses through interconnected networks only amplifies their impact, potentially causing widespread disruptions and hindering an entire military’s readiness.

    To face this evolving threat, militaries must adopt a multifaceted approach. Developing AI-driven cybersecurity measures, employing constant monitoring and adaptation techniques, implementing robust incident response protocols, and training personnel to recognize and counter these threats are all crucial steps in securing the digital battlefield. The future of warfare may be increasingly fought in the realm of code, but by embracing proactive strategies and cutting-edge technology, militaries can maintain their edge and ensure victory in this silent war of pixels and algorithms.
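
    As a concrete illustration of the AI-driven monitoring and constant adaptation mentioned above, the following is a minimal sketch, in Python with the scikit-learn library, of an unsupervised anomaly detector that flags unusual network connections for analysts to review. The traffic features and figures are hypothetical and not drawn from any fielded military system.

```python
# Minimal sketch of AI-assisted network monitoring: an unsupervised anomaly
# detector flags traffic that deviates from a learned baseline.
# Feature names and values are illustrative placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per connection: bytes sent, bytes received,
# duration (seconds), failed login attempts.
baseline_traffic = rng.normal(loc=[500, 800, 2.0, 0.1],
                              scale=[100, 150, 0.5, 0.3],
                              size=(1000, 4))

# Train on baseline traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_traffic)

# Score new connections; -1 marks traffic that deviates from the baseline.
new_connections = np.array([
    [520, 790, 2.1, 0],       # resembles normal traffic
    [90000, 120, 0.2, 25],    # exfiltration-like burst with many failed logins
])
for features, label in zip(new_connections, detector.predict(new_connections)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"{status}: {features.tolist()}")
```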

    To bolster the country’s cyber defenses, the AFP recognizes the critical need for a dedicated cyber capability to effectively defend against cyberattacks, deter adversaries, and support national security objectives. During a conference held on 15 January 2024, presided over by the Commander-in-Chief, President Ferdinand Marcos Jr., General Brawner mentioned the creation of the AFP Cyber Command to better protect the nation against threats in the cyber domain. In this regard, the AFP plans to shift toward the cyber domain and aims to establish a fully fledged “Cyber Command.” This is a clear indication of things to come in future military operations.

    Given the accelerating growth of Artificial Intelligence capabilities and technical development, the Armed Forces of the Philippines must integrate cyberspace into its tactical, operational, and strategic warfighting capabilities.

    The roster of potential cyber threats posed by weaponized Artificial Intelligence includes AI-powered malware and automated exploits, whose mitigation rests on robust cybersecurity practices, ongoing risk assessments, and the development of AI-driven defense mechanisms. For AI-enhanced phishing attacks, important countermeasures include AI-powered detection of phishing attempts alongside education campaigns. For AI-generated deepfakes, defenses require source verification, critical thinking skills, and the advancement of deepfake detection technology.
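
    To illustrate what AI-powered detection of phishing attempts might look like in practice, the sketch below trains a simple text classifier with scikit-learn. The sample messages and labels are invented for demonstration; a real deployment would rely on large, curated corpora and continuous retraining.

```python
# Minimal sketch of AI-powered phishing detection: a text classifier learns
# to separate phishing-style wording from legitimate mail.
# Training samples are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your bank details to avoid losing access",
    "Meeting moved to 3pm, see the updated agenda attached",
    "Quarterly cybersecurity report attached for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(emails, labels)

# Score an unseen message; the second column is the phishing probability.
suspect = ["Please verify your password now to keep your account active"]
probability = classifier.predict_proba(suspect)[0][1]
print(f"Phishing probability: {probability:.2f}")
```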

    The rise of supply chain vulnerabilities can be addressed through secure software development practices and supply chain risk management. The proliferation of AI-driven automated reconnaissance for cyber espionage can be countered with network segmentation, intrusion detection systems, and threat intelligence. AI also opens new attack vectors in cloud environments, where cloud security best practices, encryption of sensitive data, and identity and access management are necessary. The weaponization of AI biases influencing military decisions can be mitigated through the fair and ethical development of AI, including human oversight of AI systems. Lastly, the threat that quantum computing poses to cryptography calls for the development of post-quantum cryptography.
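
    As an illustration of one of these practices, encrypting sensitive data before it reaches cloud storage, the following sketch uses the third-party Python cryptography package with symmetric Fernet tokens. Key handling is deliberately simplified; an operational system would rely on a hardware security module or a managed key service.

```python
# Minimal sketch of encrypting sensitive data at rest with the "cryptography"
# package (symmetric Fernet tokens). Key management is intentionally simplified.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key management service
cipher = Fernet(key)

sensitive_record = b"illustrative sensitive operational record"
token = cipher.encrypt(sensitive_record)   # ciphertext safe to store in the cloud
restored = cipher.decrypt(token)           # recovery requires access to the key

assert restored == sensitive_record
print("Encrypted token (truncated):", token[:40])
```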

    Authors: Dr. Chester Cabalza, founding president of the International Development and Security Cooperation (IDSC), a Manila-based think tank, and Mr. Amadeus Quiaoit, a Research Associate of IDSC.

    (The views expressed in this article belong only to the authors and do not necessarily reflect the editorial policy or views of World Geostrategic Insights).
