Ethics of Military Artificial Intelligence: Analysis of Principles, Regulatory Frameworks, and Future Challenges
DOI: https://doi.org/10.25062/2955-0270.4943
Keywords: military ethics, artificial intelligence, morality, regulations, responsibility, military technology
Abstract
This chapter analyzes the ethics of using artificial intelligence in military contexts, focusing on fundamental ethical principles such as responsibility, justice, and transparency. The goal is to understand how these principles can guide the responsible development and use of advanced technologies in the military field. The method is an exhaustive review of the regulatory framework and the specialized literature. The results highlight the importance of a rigorous ethical approach to prevent the misuse of AI and ensure respect for human rights. The conclusions underscore the need to constantly update regulations and guarantee ethical training for military personnel in order to mitigate potential risks and promote international peace and security.
Author Biography
Edgar Eduardo Zambrano González, Armada Nacional de Colombia
Officer of the Armada Nacional de Colombia. Master's degree in Cybersecurity and Cyberdefense from the Escuela Superior de Guerra "General Rafael Reyes Prieto", Colombia. Business Administration degree from the Universidad Militar Nueva Granada, Colombia.
License
Copyright (c) 2025 Revista Ciberespacio, Tecnologia e Innovación

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.