ON THE INVASION OF ACADEMIC IVORY TOWERS
...

This page is an experiment.

Here I try to arrange the references I have used in my papers thematically.
Below are all references from my paper 'On pitfalls (and advantages) of sophisticated Large Language Models', listed alphabetically and tagged with keywords.


References (tagged by topic / keyword)


Al-Sibai, N. (2022). Facebook Takes Down AI That Churns Out Fake Academic Papers After Widespread Criticism. The Byte. https://futurism.com/the-byte/facebook-takes-down-galactica-ai

failure of Galactica


AlphaTensor. GitHub repository: deepmind/alphatensor. https://github.com/deepmind/alphatensor

technical


Alshemali, B. & Kalita, J. (2020). Improving the Reliability of Deep Neural Networks in NLP: A Review. Knowledge-Based Systems, 191, 105210. doi:10.1016/j.knosys.2019.105210

reliability


Ardila, D., Kiraly, A. P., Bharadwaj, S., Choi, B., Reicher, J. J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., Naidich, D. P., & Shetty, S. (2019). End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine, 25(6), 954-961. doi:10.1038/s41591-019-0447-x

application


Assael, Y., Shillingford, B., Whiteson, S., & de Freitas, N. (2016). LipNet: Sentence-level Lipreading. doi:10.48550/arXiv.1611.01599

application


Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. doi:10.1145/3442188.3445922

risks


BERT: Official GitHub repository. https://github.com/google-research/bert

technical


Bosio, A., Bernardi, P., Ruospo, A., & Sanchez, E. (2019). A Reliability Analysis of a Deep Neural Network. 2019 IEEE Latin American Test Symposium (LATS), 1-6. doi:10.1109/LATW.2019.8704548

reliability


Brooker, C. (2013). Black Mirror: Be Right Back (Season 2, Episode 1) [TV series episode]. Zeppotron.

SciFi


Brown, E. (2020). AI or human lyrics: Could you tell which is which? ZDNet. https://www.zdnet.com/article/ai-or-human-lyrics-could-you-tell-which-is-which

distinguishability


Brown, N. & Sandholm, T. (2019). Superhuman AI for multiplayer poker. Science, 365(6456), 885-890. doi:10.1126/science.aay2400

application


Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901. doi:10.48550/arXiv.2005.14165

technical


Brownlee, J. (2019). A gentle introduction to early stopping to avoid overtraining neural networks. Machine Learning Mastery. https://machinelearningmastery.com/early-stopping-to-avoid-overtraining-neural-network-models

technical


Bryson, J. (2022). One Day, AI Will Seem as Human as Anyone. What Then? Wired. https://www.wired.com/story/lamda-sentience-psychology-ethics-policy

risks


Campbell, M., Hoane Jr, A. J., & Hsu, F. H. (2002). Deep Blue. Artificial Intelligence, 134(1-2), 57-83.

application


Chiang, T. (2023). ChatGPT Is a Blurry JPEG of the Web. The New Yorker. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

opinion


Chowdhery, A., Narang, S., & Devlin, J. (2022). PaLM: Scaling Language Modeling with Pathways. Google AI Blog. https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html

technical


Cukier, K. (2022). Babbage: Could artificial intelligence become sentient? The Economist. https://shows.acast.com/theeconomistbabbage/episodes/babbage-could-artificial-intelligence-become-sentient

opinion


DALL-E. https://openai.com/blog/dall-e/

technical


Daly, R. (2021). AI software writes new Nirvana and Amy Winehouse songs to raise awareness for mental health support. NME. https://www.nme.com/news/music/ai-software-writes-new-nirvana-amy-winehouse-songs-raise-awareness-mental-health-support-2913524

application


Davis, E., Hendler, J., Hsu, W., Leivada, E., Marcus, G., Witbrock, M., Shwartz, V., & Ma, M. (2023). ChatGPT/LLM error tracker. https://researchrabbit.typeform.com/llmerrors?typeform-source=garymarcus.substack.com

failures


Daws, R. (2020). Medical chatbot using OpenAI's GPT-3 told a fake patient to kill themselves. AI News. https://www.artificialintelligence-news.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/

risks


Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. doi:10.48550/arXiv.1810.04805

technical


EleutherAI. https://www.eleuther.ai

technical


Elgammal, A. (2021). How a team of musicologists and computer scientists completed Beethoven’s unfinished 10th symphony. The Conversation. https://theconversation.com/how-a-team-of-musicologists-and-computer-scientists-completed-beethovens-unfinished-10th-symphony-168160

application


European Commission (2021, April 21). AI Act: Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. https://artificialintelligenceact.eu/the-act/

regulation


Fawzi, A. et al. (2022). Discovering novel algorithms with AlphaTensor. DeepMind Blog. https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor

application


Frankish, K. (2022). Some thoughts on LLMs. Blog post at The Tricks of the Mind (Nov 2). https://www.keithfrankish.com/blog/some-thoughts-on-llms

opinion


Gangadharbatla, H. (2022). The Role of AI Attribution Knowledge in the Evaluation of Artwork. Empirical Studies of the Arts, 40(2), 125-142. doi:10.1177/0276237421994697

distinguishability (pictures)


GitHub Copilot. https://docs.github.com/en/copilot

technical


Government UK consultations (2021). Artificial intelligence call for views: copyright and related rights. https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views/artificial-intelligence-call-for-views-copyright-and-related-rights

regulation


GPT-3. https://github.com/openai/gpt-3

technical


Groh, M., Epstein, Z., Firestone, C., & Picard, R. (2021). Deepfake Detection by Human Crowds, Machines, and Machine-Informed Crowds. Proceedings of the National Academy of Sciences, 119(1). doi:10.48550/arXiv.2105.06496

distinguishability (video)


Guardian editorial (2023, February 10). The Guardian view on ChatGPT search: exploiting wishful thinking. The Guardian. https://www.theguardian.com/commentisfree/2023/feb/10/the-guardian-view-on-chatgpt-search-exploiting-wishful-thinking

risks


Hadjeres, G., Pachet, F., & Nielsen, F. (2017). DeepBach: a steerable model for Bach chorales generation. Proceedings of the 34th International Conference on Machine Learning, 1362-1371.

application


Heaven, W. (2020). OpenAI's new language generator GPT-3 is shockingly good – and completely mindless. MIT Technology Review. https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/

opinion


Herman, D. (2022). The end of high school English. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412

risks
(authorship)


Hofstadter, D. (2022, June 9). Artificial neural networks today are not conscious, according to Douglas Hofstadter. The Economist. https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-today-are-not-conscious-according-to-douglas-hofstadter

opinion
(consciousness)


Hoppenstedt, M. (2022, August 11). Russische Komiker zeigen Ausschnitt von Giffey-Gespräch mit Fake-Klitschko [Russian comedians release excerpt of Giffey's conversation with a fake Klitschko]. Der Spiegel. https://www.spiegel.de/netzwelt/web/franziska-giffey-russische-komiker-zeigen-ausschnitt-von-gespraech-mit-fake-klitschko-a-527ab090-2979-4e70-a81c-08c661c0ef62

distinguishability (video)


Huang, K. (2023). Alarmed by A.I. chatbots, universities start revamping how they teach. The New York Times. https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html

risks
(authorship)


Hutson, M. (2022). Could AI help you to write your next paper? Nature, 611, 192-193. doi:10.1038/d41586-022-03479-w

risks
(authorship)


Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., … Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583–589. doi: 10.1038/s41586-021-03819-2

application


Johnson, S. & Iziev, N. (2022, April 15). A.I. Is Mastering Language. Should We Trust What It Says? The New York Times. https://www.nytimes.com/2022/04/15/magazine/ai-language.html

risks
(reliability)


Karpus, J. & Strasser, A. (under review). Persons and their digital replicas.



Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023). A Watermark for Large Language Models. doi:10.48550/arXiv.2301.10226

distinguishability (detection alg.)


Klein, E. (2022, June 19). This is a weirder moment than you think. The New York Times. https://www.nytimes.com/2022/06/19/opinion/its-not-the-future-we-cant-see.html

opinion


Krakauer, D. & Mitchell, M. (2022). The Debate Over Understanding in AI's Large Language Models. doi:10.48550/arXiv.2210.13966

opinion
(understanding)


Lionbridge (2023). What ChatGPT gets right and wrong and why it’s probably a game-changer for the localization industry. https://www.lionbridge.com/content/dam/lionbridge/pages/whitepapers/whitepaper-what-chatgpt-gets-right-and-wrong/chatgpt-whitepaper-english.pdf

opinion


Lock, S. (2022). What is AI chatbot phenomenon ChatGPT and could it replace humans? The Guardian. https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans

risks
(replace humans)


Mahian, O., Treutwein, M., Estellé, P., Wongwises, S., Wen, D., Lorenzini, G., ... Sahin, A. (2017). Measurement of similarity in academic contexts. Publications, 5(3), 18. doi:10.3390/publications5030018

distinguishability


Mahowald, K., Ivanova, A. A., Blank, I. A., Kanwisher, N., Tenenbaum, J. B., & Fedorenko, E. (2023). Dissociating language and thought in large language models: a cognitive perspective. doi:10.48550/arXiv.2301.06627

opinion
(formal & functional linguistics)


Marche, S. (2022). Will ChatGPT kill the student essay? The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/

risks
(authorship)


Marcus, G. & Davis, E. (2020). GPT-3, Bloviator: OpenAI's language generator has no idea what it's talking about. MIT Technology Review.

opinion
(understanding)


Marcus, G., & Davis, E. (2023). Large language models like ChatGPT say the darnedest things. Blog post at The Road to AI We Can Trust (Jan 10). https://garymarcus.substack.com/p/large-language-models-like-chatgpt

risks
(trust)


Marcus, G. (2022). AI platforms like ChatGPT are easy to use but also potentially dangerous. Scientific American. https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous

risks


Marcus, G. (2023). Inside the Heart of ChatGPT's Darkness. Blog post at The Road to AI We Can Trust (Feb 11). https://garymarcus.substack.com/p/inside-the-heart-of-chatgpts-darkness

risks


McQuillan, D. (2023). ChatGPT Is a Bullshit Generator Waging Class War. Vice. https://www.vice.com/en/article/akex34/chatgpt-is-a-bullshit-generator-waging-class-war

opinion


Metz, R. (2022, September 3). AI won an art contest, and artists are furious. CNN Business. https://edition.cnn.com/2022/09/03/tech/ai-art-fair-winner-controversy/index.html

distinguishability (pictures)


Michael, J., Holtzman, A., Parrish, A., Mueller, A., Wang, A., Chen, A., ... & Bowman, S. R. (2022). What do NLP researchers believe? Results of the NLP community metasurvey. doi:10.48550/arXiv.2208.12852

survey


Mitchell, E., Lee, Y., Khazatsky, A., Manning, C. D., & Finn, C. (2023). DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. doi:10.48550/arXiv.2301.11305

distinguishability (detection alg.)


Müller, N., Pizzi, K. & Williams, J. (2022). Human Perception of Audio Deepfakes. In Proceedings of the 1st International Workshop on Deepfake Detection for Audio Multimedia (DDAM '22). Association for Computing Machinery, New York, NY, USA, 85-91. doi:10.1145/3552466.3556531

distinguishability
(audio)


Murphy, M. (2019). This app is trying to replicate you. Quartz. https://qz.com/1698337/replika-this-app-is-trying-to-replicate-you/

application
(Replika)


Nakagawa, H. & Orita, A. (2022). Using deceased people's personal data. AI & Society. doi:10.1007/s00146-022-01549-1

risks
(privacy)


Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers

risks
(ethics)


Peritz, A. (2022, September 6). A.I. Is Making It Easier Than Ever for Students to Cheat. Slate. https://slate.com/technology/2022/09/ai-students-writing-cheating-sudowrite.html

risks
(authorship)


Rajnerowicz, K. (2022). Human vs. AI Test: Can We Tell the Difference Anymore? Statistics & Tech Data Library. https://www.tidio.com/blog/ai-test

distinguishability


Roberts, M. (2022). Is Google’s LaMDA artificial intelligence sentient? Wrong question. The Washington Post. https://www.washingtonpost.com/opinions/2022/06/14/google-lamda-artificial-intelligence-sentient-wrong-question

opinion
(sentience)


Robertson, A. (2022). The US Copyright Office says an AI can’t copyright its art. The Verge. https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise

regulation
(copyright)


Rodriguez, S. (2022). Chomsky vs. Chomsky. http://opendoclab.mit.edu/presents/ch-vs-ch-prologue-sandra-rodriguez

art project


Rogers, A. (2023). The new Bing is acting all weird and creepy — but the human response is way scarier. Insider. https://www.businessinsider.com/weird-bing-chatbot-google-chatgpt-alive-conscious-sentient-ethics-2023-2

opinion


Rogers, A., Kovaleva, O., Rumshisky, A. (2020). A Primer in BERTology: What we know about how BERT works. doi:10.48550/arXiv.2002.12327

technical


Roose, K. (2022, December 5). The Brilliance and Weirdness of ChatGPT. The New York Times.

opinion


Schwitzgebel, E. (2022). GPT-3 Can Talk Like the Philosopher Daniel Dennett Without Parroting His Words. Blog post at The Splintered Mind (Nov 3).

opinion


Schwitzgebel, E. et al. (2023). Creating a Large Language Model of a Philosopher. doi:10.48550/arXiv.2302.01339

distinguishability
(survey)


Shanahan, M. (2023). Talking About Large Language Models. doi:10.48550/arXiv.2212.03551

opinion


Silver, D., Huang, A. et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529, 484-489. doi:10.1038/nature16961

application


Silver, D., Hubert, T. et al. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140-1144. doi:10.1126/science.aar6404

application


Simonite, T. (2020). Did a Person Write This Headline, or a Machine? Wired. https://www.wired.com/story/ai-text-generator-gpt-3-learning-language-fitfully

distinguishability
(text)


Sinapayen, L. (2023). Telling Apart AI and Humans #3: Text and humor. Towards Data Science. https://towardsdatascience.com/telling-apart-ai-and-humans-3-text-and-humor-c13e345f4629

distinguishability


Sparrow, J. (2022, November 19). 'Full-on robot writing': the artificial intelligence challenge facing universities. The Guardian. https://www.theguardian.com/australia-news/2022/nov/19/full-on-robot-writing-the-artificial-intelligence-challenge-facing-universities

risks
(authorship)


Srivastava, A., Rastogi, A., Rao, A., Shoeb, A., Abid, A., Fisch, A., ... Shaham, U. (2022). Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. doi:10.48550/arXiv.2206.04615

benchmark


Strasser, A., Crosby, M. & Schwitzgebel, E. (2023). How far can we get in creating a digital replica of a philosopher? In R. Hakli, P. Mäkelä, J. Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy 2022. Series Frontiers of AI and Its Applications, 366, 371-380. IOS Press, Amsterdam. doi:10.3233/FAIA220637

application


Strasser, A. (2022). From tool use to social interactions. In J. Loh & W. Loh (Eds.), Social robotics and the good life. Bielefeld: transcript Verlag. doi:10.1515/9783839462652-004

opinion
(tools or asocial agents)


Taylor, R., Kardas, M., Cucurull, G., Scialom, T., Hartshorn, A. S., Saravia, E., Poulton, A., Kerkez, V., & Stojnic, R. (2022). Galactica: A Large Language Model for Science. doi:10.48550/arXiv.2211.09085

technical


Thompson, D. (2022). Breakthroughs of the Year. The Atlantic. https://www.theatlantic.com/newsletters/archive/2022/12/technology-medicine-law-ai-10-breakthroughs-2022/672390

opinion


Thoppilan, R., et al. (2022). LaMDA: Language Models for Dialog Applications. doi:10.48550/arXiv.2201.08239

technical


Thorp, H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. doi:10.1126/science.adg7879

risks
(authorship)


Tiku, N. (2022, June 11). The Google engineer who thinks the company's AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine

opinion
(sentience)


Vincent, J. (2023). Top AI conference bans use of ChatGPT and AI language tools to write academic papers. The Verge. https://www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper

risks
(authorship)


Vota, W. (2020). Bot or Not: Can You Tell What is Human or Machine Written Text? https://www.ictworks.org/bot-or-not-human-machine-written/#.Y9VO9hN_oRU

distinguishability
(text)


Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P., ... Gabriel, I. (2021). Ethical and social risks of harm from Language Models. doi:10.48550/arXiv.2112.04359

risks
(ethics)


Weinberg, J. (ed.) (2020). Philosophers On GPT-3 (updated with replies by GPT-3). Daily Nous. https://dailynous.com/2020/07/30/philosophers-gpt-3

opinion
(performance)


Wiggers, K. (2022). OpenAI's attempts to watermark AI text hit limits. TechCrunch. https://techcrunch.com/2022/12/10/openais-attempts-to-watermark-ai-text-hit-limits

distinguishability
(watermark)


Wolfram, S. (2023). What Is ChatGPT Doing ... and Why Does It Work? https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work

technical (intro)


Wykowska, A., Chaminade, T., & Cheng, G. (2016). Embodied artificial agents for understanding human social cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1693), 20150375. doi:10.1098/rstb.2015.0375

opinion
(embodiment)