The ethics of algorithms
Mapping the debate
DOI: https://doi.org/10.29166/csociales.v1i44.4213
Keywords: algorithms, automation, Big Data, data analysis, data mining, ethics, machine learning
Abstract
In information societies, operations, decisions and choices previously entrusted to humans are increasingly being delegated to algorithms, which may recommend, or even decide, how data should be interpreted and what actions should be taken as a result. Ever more often, algorithms mediate social processes, stock market transactions, government decisions, and the way we perceive, understand and interact with one another and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences, affecting individuals as well as groups and whole societies. This article makes three contributions to clarifying the ethical importance of algorithmic mediation: it provides a prescriptive map to organise the debate, reviews the current discussion of the ethical aspects of algorithms, and assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
License
Copyright 2023 Agustina Lassi
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Open access policy
The journal Ciencias Sociales follows the Open Access model, under which the contents of scientific publications are available online in full text, free of charge and without temporary embargoes, and editorial production costs are not transferred to the authors.
Accordingly, authors pay no fee at submission or at any point in the editorial process, upholding the right to information with equity and equal opportunity of access.
License and copyright
Authors retain all publication rights to the article and grant Revista Ciencias Sociales a non-exclusive, non-transferable, royalty-free license of unlimited duration for its worldwide reproduction, distribution and public communication under a Creative Commons Attribution 4.0 International License (CC BY NC 4.0).