Brief review of the current state of Artificial Intelligence

Authors

  • Rafael Bello Pérez, Universidad Central de Las Villas "Marta Abreu"
  • Alejandro Rosete Suárez, Universidad Tecnológica de La Habana "José Antonio Echeverría" (CUJAE)

Keywords:

Artificial Intelligence, digital transformation

Abstract

The purpose of this article is to offer the reader an overview of Artificial Intelligence today: its main methods and achievements, as well as its application to solving a range of socio-economic and scientific problems. Some of its development trends are presented, together with the challenges these may pose for humanity as AI becomes fully integrated into practically every facet of human life, anticipating possible negative effects of its future performance. This overview serves as a preamble to the presentation of the works included in this issue of the journal.


Published

2021-03-07

How to cite

Bello Pérez, R., & Rosete Suárez, A. (2021). Breve reseña sobre el estado actual de la Inteligencia Artificial. Revista Cubana de Transformación Digital, 2(1), 01–13. Retrieved from https://rctd.uic.cu/rctd/article/view/108