====== AI ====== > //We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. - Roy Amara// Towards an understanding of current issues. Most of the content here is about generative AI (large language models and diffusion models), which is the hot topic as of February 2023, but there's a scattering of broader AI history and issues as well. [[#end_of_page|Jump to end.]] Caveats: * I am not a machine learning expert nor do I work in the field. This page is a scratchpad, reflecting my current efforts to catch up in my understanding of the tech and surrounding issues. * All comments and opinions are my own, and are not intended to reflect the positions of my employer or anyone affiliated with me. * I make no guarantee of the accuracy of any content or cited sources herein. ===== Key Technologies ===== **Transformers** (DALL-E, see 2017-06-12.a and 2022-04-13.a) and **Diffusion Models** (DALL-E 2, Stable Diffusion, Midjourney, see 2022-05-12.a and 2023-02-05.a). ===== What is... ===== ==== ...a parameter? ==== In machine learning, a **parameter** is a configuration variable that is internal to the model. Parameter values are usually learned from training data and saved as part of the trained model. Types of model parameters include the weights in an artificial neural network and the support vectors in a support vector machine. * In a language model, the parameters are the learned weights of the network, including the values of the word embeddings and the strengths of the connections between different layers of the model. A larger number of parameters generally means that the model can capture more complex relationships between words and produce more accurate predictions. * In an image diffusion model, the parameters are the learned weights of the denoising network that reverses the noising process at each step. A larger number of parameters might allow the model to capture more complex spatial patterns in the input images, or to detect and preserve fine details better. Increasing the number of parameters increases the time and resources required to train the model, and may increase the risk of overfitting, where the model becomes too specialized to its training data. As of February 2023, popular generative models have tens or hundreds of billions of parameters. For example, GPT-3 contains 175 billion parameters. GPT-4 is rumored to be in the trillions. A **hyperparameter** is a configuration that is external to the model and whose value cannot be estimated from data. It's usually set manually before training. (See 2020-12-30.a) ==== ...latent space? ==== The **latent space** of a machine learning model is the internal conceptual space in which the model maps input data based on features it's learned to care about. This space can be thought of as a compressed representation of the input data, where each point in the space represents a possible configuration of input variables that is consistent with the training data. * In an image diffusion model, the latent space is often used to represent a simplified version of the input image. * In a language translation model, the latent space might represent the meaning of a sentence or phrase in a way invariant to the specific words or grammar used. * In a text generation model, the latent space might represent the style or tone of the generated text, or control its length, topic or complexity.
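To make these terms a bit more concrete, here's a minimal toy sketch of my own (not from any of the cited sources), assuming PyTorch: a tiny autoencoder whose learnable weights are its **parameters**, whose hand-chosen learning rate is a **hyperparameter**, and whose two-number encoding of an input is a point in its **latent space**. The architecture and sizes are arbitrary illustrative choices.

<code python>
# Toy illustration only: a tiny autoencoder, assuming PyTorch is installed.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        # The encoder squeezes an input (e.g. a flattened 28x28 image) down to 2 numbers;
        # the decoder tries to reconstruct the original from those 2 numbers.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, input_dim))

    def forward(self, x):
        z = self.encoder(x)            # z is a point in the model's latent space
        return self.decoder(z), z

model = TinyAutoencoder()

# Parameters: every learnable weight and bias. Training adjusts these values.
n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params:,}")   # roughly 100 thousand here, vs. 175 billion for GPT-3

# Latent space: encoding an input yields its coordinates in that space.
x = torch.rand(1, 784)                          # stand-in for one flattened image
reconstruction, z = model(x)
print("latent coordinates:", z)                 # e.g. tensor([[ 0.12, -0.34]])

# Hyperparameter: chosen by hand before training, not learned from data.
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
</code>

A real language or diffusion model works the same way in principle, just with vastly more parameters and a much higher-dimensional latent space.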
ChatGPT presented me with the following analogy for latent space: > Imagine you have a box full of different kinds of toys - balls, blocks, stuffed animals, and so on. Each toy has a number of different properties, such as its shape, color, size, and texture. Now, imagine you have a special device that can scan each toy and extract a set of numbers that represent these properties - for example, a ball might be represented by the numbers "round", "red", "small", and "smooth". > > These numbers are like the coordinates of the toy in a special space - the latent space - where each point represents a possible combination of toy properties. By exploring this space, you can find patterns and relationships between different toys - for example, you might discover that all the red toys tend to be small and smooth, while all the blue toys tend to be large and bumpy. > > The latent space is like a simplified representation of the toys that captures the most important information about them - their properties - in a way that makes it easier to understand and analyze. Similarly, in a machine learning model, the latent space is a simplified representation of the input data that captures the most important features or patterns in a way that makes it easier for the model to learn and generalize to new data. Latent space and parameter space are not the same thing. Parameter space is the space of all possible values the model's parameters can take (all possible training outcomes), while latent space is a "feature space" derived from the input data (a concrete training outcome). There is some similarity between the latent space of a machine learning model and the gamut of a color space. * In color theory, [[https://ciechanow.ski/color-spaces/|"gamut"]] is the range of colors that can be represented within a color space, which is smaller than the set of all possible colors. Different color spaces (sRGB, Adobe RGB, CMYK) have different gamuts, and we "simplify" a true color when we map it to a color-space representation. * The latent space of a model is the range of "ideas" the model can represent in its "subconscious," which is smaller than the set of all possible inputs. Different models have different latent spaces, and we "simplify" an input (image, text, audio, etc.) when the model transforms it to an internal representation. However, while a color space is usually defined by a few primary colors and a set of mixing rules, latent space has many dimensions and may be structured differently from model to model. (See 2020-02-04.a, 2022-09-28.a and 2022-11-17.a) ==== ...this other shorthand? ==== You will hear these thought experiments, arguments and phenomena referenced in discussion by name, and it's helpful to know what they mean. * [[https://twitter.com/GalaxyKate/status/1583907942834716672|Bach faucet]] - a situation where a generative system makes an endless supply of some content at or above the quality of some culturally-valued original, but that endless supply makes the content no longer rare, and thus less valuable. * [[wp>Chinese room]] - a human in a box following detailed instructions to answer questions in Chinese does not necessarily understand Chinese. Argues that a computer cannot have understanding or consciousness, a refutation of the Turing Test. * [[wp>ELIZA effect]] - the tendency to unconsciously anthropomorphize computer behaviors. * [[wp>Monkey_selfie_copyright_dispute|Monkey selfie]] - Images not created by a human are not copyrightable. Or are they?
* [[https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer|Paperclip maximizer]] - thought experiment illustrating the existential threat of AGI prioritizing an innocuous goal - like making paperclips - over everything else, including human life. * [[wp>Turing test]] - a machine is intelligent if it is indistinguishable from a human under questioning. ===== Major Players ===== === OpenAI === [[https://openai.com/about|OpenAI]] was founded as a non-profit in 2015 by eight entrepreneurs including Sam Altman and Elon Musk. Their mission as of March 2023 is "to ensure that artificial general intelligence benefits all of humanity." (See also [[https://openai.com/charter|their charter]] from April 2018.) Musk resigned from the board in 2018. They transitioned to a for-profit model in 2019 and started a major partnership with Microsoft (although technically the non-profit is still the controlling shareholder). Technologies associated with OpenAI include language models GPT-2, 3 and 4, and ChatGPT; image technologies CLIP, DALL-E and DALL-E 2; and speech-to-text model Whisper. === Stability.ai === [[https://stability.ai/|Stability.ai]] started in 2021 and bills itself as "the open source generative AI company." Their mission is to "maximize the accessibility of modern AI to inspire global creativity and innovation." They partner with Amazon Web Services. Stability.ai launched Stable Diffusion in August 2022 (2022-08-10.a) and it quickly gained popularity since its source code and model are publicly available. === Midjourney === [[https://www.midjourney.com/|Midjourney]] is a small self-funded research lab led by David Holz. They have a text-to-image model similar to DALL-E and Stable Diffusion that has been in open beta since July 2022. It is only accessible through their official Discord server. It's received a lot of positive attention for being easier to use and producing better images than DALL-E or Stable Diffusion, and it's been improving rapidly with new releases every few months. === Google === Invented the transformer architecture (2017-06-12.a), and built one of the earliest chatbots to use it, named Meena (2020-01-28.a). More recently announced "Bard", a chatbot powered by Google's in-house large language model "LaMDA" (2021-05-18.a). Also a major investor in Anthropic. === Meta === Released "Galactica" in 2022 (2022-11-16.a), a large language model focused on scientific research. It was widely criticized and taken down after three days (2022-11-18.b). More recently announced "LLaMA," a more general language model (2023-02-24.c), which leaked a week later (2023-03-03.a). === Microsoft === A major investor in OpenAI, and using their technologies to release a number of AI products including a new AI-powered Bing search and an AI-enhanced design tool. === Anthropic === [[https://www.anthropic.com/|Anthropic]] was founded by former OpenAI employees in 2021 (including Daniela Amodei and Dario Amodei) with a focus on "better and more harmless AI assistants" following its "Constitutional AI" framework (2022-12-15.a). Released a chat model called **Claude** (2023-03-14.h). Partnered with Notion, Quora (Poe), DuckDuckGo, Juni Learning, Robin AI, and Assembly AI.
===== Issues and Criticisms ===== ==== Legality ==== While there are good analogies to existing technologies and probably-relevant precedents, I think generative AI is legally in a similar position to Uber and Airbnb in 2010, which is to say it doesn't break any existing laws, and it's new and different enough that it's anybody's guess what courts will decide in the future. One legal issue is the question of **scraping** copyrighted text and images from the web as training data. Web scraping has been affirmed as legal in multiple countries in the last few years, and its use as AI training data is not so far removed from search engines indexing the same content (2020-08-20.a, 2022-04-18.a, 2022-06-28.a). However, multiple lawsuits have been filed contesting this. (2022-11-03.a, 2023-01-13.a, 2023-01-16.a) A specific concern often raised regarding training on copyrighted content is that models sometimes can, for practical purposes, **reproduce their training data.** (2022-12-13.a, 2023-01-30.b, 2023-02-03.a) It's a good question whether the models themselves might therefore be considered unauthorized reproductions with a very unusual compression mechanism. Arguments in favor of the technology say these cases are rare exceptions, and cover such a small portion of latent space that the work is almost certainly transformative. They also note that in practice, reproducing a copyrighted work is often allowed under fair use, and anyway happens in private all the time (e.g. I copy a favorite panel of a Spider-Man comic and put it on my wall). While the above issues concern ownership of training data, there are also questions about **ownership of generated content.** Some online platforms have banned AI-generated content over concerns that it will be illegal in the future. (2022-09-21.b, 2022-12-05.a) While there was a lot of uncertainty about this issue in late 2022 (2022-11-10.a, 2022-11-15.a), in February 2023 the US Copyright Office decided that Midjourney-generated art is not copyrightable (2023-02-21.a), likely setting the precedent until the courts get involved. Besides copyright concerns, the FTC has warned businesses using AI to be careful not to violate **truth in advertising** laws, or **fairness and equity** laws like the Fair Credit Reporting Act and the Equal Credit Opportunity Act. (2021-04-19.a, 2023-02-27.b) > Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter. Businesses, in turn, are setting new policies and warning employees against **leaking confidential or regulated information to Large Language Models.** Lots of AI platforms are actively capturing data in order to continue training and tuning their models, making the information vulnerable to "exfiltration via machine learning inference." (2023-03-07.c) OpenAI now requires users or companies to opt in to this (2023-03-01.c), but not all AI platforms have followed suit. ==== Ethics and Norms ==== [Work in progress] Disclosure: Norms about when to reveal the use of AI are still being established. * An artist won an art contest and didn't reveal until after the competition that he'd used AI. (2022-09-02.a) * Using AI can seem impersonal; some situations demand a human response. (2023-02-21.b) * Is the whole of AI research built on a toxic Silicon Valley subculture? (2023-03-07.b) ===== References ===== //The initial identifier is roughly the date of publication or date I heard about it.
Publication dates are listed later in the citation when available. Articles are tagged in parens with the following categories: Applications, Business, Ethics, Explainers, Legal, Research, Tools. My shortlist of most interesting/helpful links is in **bold**. Over time I'll make an effort to replace blog posts and opinions with primary sources.// [[#ref2010|2010]], [[#ref2015|2015]], [[#ref2016|2016]], [[#ref2017|2017]], [[#ref2018|2018]], [[#ref2019|2019]], [[#ref2020|2020]], [[#ref2021|2021]]\\ 2022: [[#ref2022_01|Jan]], [[#ref2022_02|Feb]], [[#ref2022_03|Mar]], [[#ref2022_04|Apr]], [[#ref2022_05|May]], [[#ref2022_06|Jun]], [[#ref2022_07|Jul]], [[#ref2022_08|Aug]], [[#ref2022_09|Sep]], [[#ref2022_10|Oct]], [[#ref2022_11|Nov]], [[#ref2022_12|Dec]]\\ 2023: [[#ref2023_01|Jan]], [[#ref2023_02|Feb]], [[#ref2023_03|Mar]], [[#ref2023_04|Apr]], [[#ref2023_07|Jul]] 2010-07-10.a [[https://www.theguardian.com/technology/2010/jul/11/david-cope-computer-composer|David Cope: 'You pushed the button and out came hundreds and thousands of sonatas.']] Tim Adams for The Guardian, 2010-07-10. (Research) 2015-06-18.a [[https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html|Inceptionism: Going Deeper into Neural Networks.]] Alexander Mordvintsev et al. for Google Research, 2015-06-18. Early image generation with neural networks. (Research) 2015-07-06.a [[https://www.alanzucconi.com/2015/07/06/live-your-deepdream-how-to-recreate-the-inceptionism-effect/|Understanding Deep Dreams.]] Alan Zucconi, 2015-07-06. Explains how neural networks generate images. (Explainers) 2015-08-26.a [[https://arxiv.org/abs/1508.06576|A Neural Algorithm of Artistic Style.]] Leon A. Gatys et al., 2015-08-26. Demonstrates style transfer via convolutional neural networks. (Research) 2017-03-29.a [[https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/|Faiss: A library for efficient similarity search.]] Hervé Jegou et al., Meta Engineering. A useful technology for semantic search. (Research) **2017-06-12.a [[https://arxiv.org/abs/1706.03762|Attention Is All You Need.]] Ashish Vaswani et al., Google Research, 2017-06-12.** Paper introducing the transformer architecture. (Research) 2017-08-31.a [[https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html|Transformer: A Novel Neural Network Architecture for Language Understanding.]] Jakob Uszkoreit for Google, 2017-08-31. Explains the transformer architecture invented by Google Research (2017-06-12.a). (Research) 2018-03-14.a [[https://www.alanzucconi.com/2018/03/14/introduction-to-deepfakes/|An Introduction to DeepFakes.]] Alan Zucconi, 2018-03-14. Also in this series [[https://www.alanzucconi.com/2018/03/14/the-ethics-of-deepfakes/|The Ethics of Deepfakes]] and [[https://www.alanzucconi.com/2018/03/14/a-practical-tutorial-for-fakeapp/|A Practical Tutorial for FakeApp]]. (Explainers) **2018-10-11.a [[https://arxiv.org/abs/1810.04805|BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.]] Jacob Devlin et al., Google Research, October 2018.** Possibly the first Large Language Model.
(Research) **2018-12-12.a [[https://arxiv.org/abs/1812.04948|A Style-Based Generator Architecture for Generative Adversarial Networks.]] Tero Karras et al., 2018-12-12.** Introduces StyleGAN, popularized by [[https://thispersondoesnotexist.com/|This Person Does Not Exist.]] (Research) 2018-12-29.a [[https://towardsdatascience.com/explained-a-style-based-generator-architecture-for-gans-generating-and-tuning-realistic-6cb2be0f431|Explained: A Style-Based Generator Architecture for GANs - Generating and Tuning Realistic Artificial Faces.]] Rani Horev for Towards Data Science, 2018-12-29. Explains the architecture of generative adversarial networks. (Explainers) **2019-01-28.a [[https://arxiv.org/abs/1901.09813|Analogies Explained: Towards Understanding Word Embeddings.]] Carl Allen and Timothy Hospedales, 2019-01-28.** Towards a mathematical understanding of encoded bias in language models. "Word embeddings generated by neural network methods... exhibit seemingly linear behaviour, e.g. the embeddings of analogy 'woman is to queen as man is to king' approximately describe a parallelogram. This property is particularly intriguing since the embeddings are not trained to achieve it." (Research) 2019-04-05.a [[https://arxiv.org/abs/1904.03189|Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?]] Rameen Abdal et al. 2019-04-05. Enables starter images and transfer of specific features. (Research) 2020-01-28.a [[https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html|Towards a Conversational Agent that Can Chat About…Anything.]] Daniel Adiwardana and Thang Luong, Google Research, January 28th. Explains the [[https://arxiv.org/abs/2001.09977|paper]] that introduces Meena, a multi-turn open-domain chatbot in the form of a 2.6B parameter neural network. (Research) 2020-02-04.a [[https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d|Understanding Latent Space in Machine Learning.]] Ekin Tiu, 2020-02-04. Defines latent space as "a representation of compressed data." (Explainers) **2020-06-19.a [[https://arxiv.org/abs/2006.11239|Denoising Diffusion Probabilistic Models.]] Jonathan Ho et al. 2020-06-19.** Paper presenting high quality image synthesis results using diffusion probabilistic models, an approach later popularized by DALL-E 2 and Stable Diffusion. See also [[https://github.com/hojonathanho/diffusion|reference implementation]] and [[https://huggingface.co/blog/annotated-diffusion|this HuggingFace explainer]]. (Research) 2020-08-20.a [[https://medium.com/@jurgenstojku_62417/is-web-scraping-legal-in-2020-63cbcf0d5ec#:~:text=Under%20the%20EU%27s%20General%20Data,within%20the%20European%20Economic%20Area.|Is web scraping legal in 2020?]] Jurgenstojku, 2020-08-20. (Legal) 2020-12-30.a [[https://towardsdatascience.com/parameters-and-hyperparameters-aa609601a9ac|Parameters and Hyperparameters in Machine Learning and Deep Learning.]] Kizito Nyuytiymbiy for Towards Data Science, 2020-12-30. (Explainers) 2020-12-31.a [[https://arxiv.org/abs/2101.00027|The Pile: An 800GB Dataset of Diverse Text for Language Modeling.]] Leo Gao et al., Eleuther AI, Dec 2020. An 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. [[https://pile.eleuther.ai/|(website)]] (Research) 2021-01-05.a [[https://openai.com/blog/dall-e/|DALL·E: Creating Images from Text.]] OpenAI, 2021-01-05. 
(Research) 2021-01-05.b [[https://openai.com/blog/clip/|CLIP: Connecting Text and Images.]] OpenAI, 2021-01-05. Introduces a neural network called CLIP which efficiently learns visual concepts from natural language supervision. (Research) 2021-03-01.a [[https://dl.acm.org/doi/10.1145/3442188.3445922|On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?]] Emily M. Bender, Timnit Gebru et al., March 1st. Explores risks of large language models and recommends mitigations. (Ethics) 2021-04-19.a [[https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai|Aiming for truth, fairness, and equity in your company’s use of AI.]] Elisa Jillson for the FTC, 2021-04-19. The FTC publishes guidance reminding companies using AI of the FTC's legal basis for enforcing fairness, nondiscrimination and transparency in their products. (Legal) **2021-05-18.a [[https://blog.google/technology/ai/lamda/|LaMDA: our breakthrough conversation technology.]] Eli Collins and Zoubin Ghahramani, Google Research, May 18th.** A new conversation model built on the transformer architecture (2017-06-12.a) and earlier conversation research (2020-01-28.a). (Research) 2021-06-17.a [[https://arxiv.org/abs/2106.09685|LoRA: Low-Rank Adaptation of Large Language Models.]] Edward Hu et al., Microsoft, June 2021. Introduces a more efficient fine-tuning approach for LLMs. (Research) 2021-12-16.a [[https://openai.com/blog/webgpt/|WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing.]] OpenAI, 2021-12-16. (Research) 2022-03-31.a [[https://laion.ai/blog/laion-5b/|LAION-5B: A new era of open large-scale multi-modal datasets.]] Romain Beaumont for Laion, 2022-03-31. Announces an open dataset of 5.85 billion CLIP-filtered image-text pairs, which is eventually used to train Stable Diffusion. (Research) **2022-04-06.a [[https://openai.com/dall-e-2/|DALL·E 2.]] OpenAI, 2022-04-06.** (Research) 2022-04-13.a [[https://arxiv.org/abs/2204.06125|Hierarchical Text-Conditional Image Generation with CLIP Latents.]] Aditya Ramesh et al., 2022-04-13. Explains the architecture of DALL-E 2. (Research) 2022-04-18.a [[https://techcrunch.com/2022/04/18/web-scraping-legal-court/#:~:text=In%20its%20second%20ruling%20on,computer%20hacking%20under%20U.S.%20law.|Web scraping is legal, US appeals court reaffirms.]] Zack Whittaker for TechCrunch, 2022-04-18. The case reached the Supreme Court, was sent back to the Ninth Circuit, and concluded that public web scraping does not constitute "hacking" under U.S. law. Quote: "a major win for archivists, academics, researchers and journalists who use tools to mass collect, or scrape, information that is publicly accessible on the internet." (Legal) 2022-05-12.a [[https://www.assemblyai.com/blog/diffusion-models-for-machine-learning-introduction/|Introduction to Diffusion Models for Machine Learning.]] Ryan O'Connor for AssemblyAI, 2022-05-12. (Explainers) 2022-05-30.a [[https://arxiv.org/abs/2205.15463|Few-Shot Diffusion Models.]] Giorgio Giannone et al., 2022-05-30. A paper presenting a "few-shot training" improvement on DDPM (2020-06-19.a). "...the model is able to generate samples from previously unseen classes conditioned on as few as 5 samples from that class." (Research) 2022-06-06.a [[https://github.com/apple/ml-ane-transformers|Apple Neural Engine (ANE) Transformers.]] On GitHub. Transformer architecture optimized for Apple silicon.
(Research) **2022-06-13.a [[https://www.theverge.com/2022/6/13/23165535/google-suspends-ai-artificial-intelligence-engineer-sentient|Google suspends engineer who claims its AI is sentient.]] Jon Porter for the Verge, June 13th.** Google engineer Blake Lemoine, from its Responsible AI organization, is placed on paid administrative leave after he became convinced that Google's LaMDA model (2021-05-18.a) might be sentient and took actions that Google claims violated its confidentiality policies. Blake is later fired (2022-07-22.a). (Business) 2022-06-28.a [[https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents/outcome/artificial-intelligence-and-intellectual-property-copyright-and-patents-government-response-to-consultation|Artificial Intelligence and Intellectual Property: copyright and patents: Government response to consultation.]] UK Intellectual Property Office, 2022-06-28. (Legal) 2022-07-22.a [[https://www.theverge.com/2022/7/22/23274958/google-ai-engineer-blake-lemoine-chatbot-lamda-2-sentience|The engineer who claimed a Google AI is sentient has been fired.]] Mitchell Clark for the Verge, July 22nd. Blake Lemoine (2022-06-13.a) is fired after being on administrative leave for about a month. (Business) 2022-07-26.a [[https://txt.cohere.ai/llm-parameters-best-outputs-language-ai/|LLM Parameters Demystified: Getting The Best Outputs from Language AI.]] Cohere team, July 26th. Explains "model," "tokens," "temperature," "top-k and top-p," "stop sequences" and "frequency and presence penalties." (Explainers) **2022-08-10.a [[https://stability.ai/blog/stable-diffusion-announcement|Stable Diffusion Launch Announcement.]] Stability.ai, 2022-08-10.** Announcement of an open diffusion model trained on LAION 5B (see 2022-03-31.a). (Research) 2022-09-02.a [[https://archive.ph/ZMRjt|An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.]] Kevin Roose for The New York Times, 2022-09-02. (Legal) 2022-09-15.a [[https://www.theverge.com/2022/9/15/23340673/ai-image-generation-stable-diffusion-explained-ethics-copyright-data|Anyone can use this AI art generator — that’s the risk.]] James Vincent for The Verge, 2022-09-15. (Explainers, Ethics) 2022-09-21.a [[https://openai.com/blog/whisper/|Introducing Whisper.]] OpenAI, 2022-09-21. OpenAI announces an English speech recognition tech. (Research) 2022-09-21.b [[https://www.theverge.com/2022/9/21/23364696/getty-images-ai-ban-generated-artwork-illustration-copyright|Getty Images bans AI-generated content over fears of legal challenges.]] James Vincent for The Verge, 2022-09-21. (Business) 2022-09-28.a [[https://keras.io/examples/generative/random_walks_with_stable_diffusion/#:~:text=Stable%20Diffusion%20isn't%20just,training%2Dtime%20fine%2Dtuning.|A walk through latent space with Stable Diffusion.]] Ian Stenbit et al., 2022-09-28. Demonstrates the concept of latent space via incremental walks through it, with lots of examples. (Explainers) 2022-09-29.a [[https://theaisummer.com/diffusion-models/|How diffusion models work: the math from scratch.]] Sergios Karagiannakos and Nikolas Adaloglou, 2022-09-29. (Explainers) 2022-10-06.a [[https://arxiv.org/abs/2210.03629|ReAct: Synergizing Reasoning and Acting in Language Models.]] Shunyu Yao et al., Oct 6. Describes a method of prompting LLMs for reasoning and action planning that enables them to interact with an environment (like an external API). 
[[https://react-lm.github.io/|Associated website.]] (Research) 2022-10-25.a [[https://www.theverge.com/2022/10/25/23422359/shutterstock-ai-generated-art-openai-dall-e-partnership-contributors-fund-reimbursement|Shutterstock will start selling AI-generated stock imagery with help from OpenAI.]] James Vincent for The Verge, 2022-10-25. (Business) 2022-11-03.a [[https://githubcopilotlitigation.com/|GitHub Copilot litigation.]] Matthew Butterick, 2022-11-03. (Legal) 2022-11-10.a [[https://trademarklawyermagazine.com/ai-generated-art-who-owns-the-rights/|AI generated art – who owns the rights?]] The Trademark Lawyer, 2022-11-10. (Legal) 2022-11-15.a [[https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data|The scary truth about AI copyright is nobody knows what will happen next.]] James Vincent for The Verge, 2022-11-15. (Legal) 2022-11-16.a [[https://arxiv.org/abs/2211.09085|Galactica: A Large Language Model for Science.]] Ross Taylor et al., November 16th. Meta unveils a large language model designed to act as an expert system for technical and scientific knowledge, showing that it outperforms existing models. (Research) 2022-11-17.a [[https://metaphysic.ai/what-is-the-latent-space-of-an-image-synthesis-system/|What Is the Latent Space of an Image Synthesis System?]] Beni Issembert for Metaphysic, 2022-11-17. Article that defines latent space and surveys research that explores or manipulates it. (Explainers) 2022-11-18.a [[https://sites.google.com/view/stablediffusion-with-brain/|High-resolution image reconstruction with latent diffusion models from human brain activity.]] Yu Takagi and Shinji Nishimoto, Nov 18th. Pulling pictures from brains with fMRI and Stable Diffusion. (Research) 2022-11-18.b [[https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/|Why Meta’s latest large language model survived only three days online.]] Will Douglas Heaven for MIT Technology Review, November 18th. Three days after it launched, Meta takes down its public demo of Galactica due to vocal criticism of how often it is wrong or biased. (Business) **2022-11-30.a [[https://openai.com/blog/chatgpt/|ChatGPT: Optimizing Language Models for Dialogue.]] OpenAI, Nov 30th.** OpenAI announces ChatGPT. (Research) 2022-12-05.a [[https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers|AI-generated answers temporarily banned on coding Q&A site Stack Overflow.]] James Vincent for The Verge, 2022-12-05. Mods say the flood of generated answers is often incorrect in subtle ways. (Business) 2022-12-05.b [[https://stratechery.com/2022/ai-homework/|AI Homework.]] Ben Thompson, Stratechery, Dec 5th. Reflections on what LLMs are and are not good at. (Business) 2022-12-09.a [[https://www.craiyon.com/|Craiyon.]] Free online image generator based on DALL-E. (Applications) 2022-12-09.b [[https://www.youtube.com/watch?v=7J4ACbj_B7g|Add Dynamic Lighting to your drawing like a PRO easily using this AI o.o (ClipDrop Relight).]] Idaero Small Artist, 2022-09-14. Video demoing [[https://clipdrop.co/relight|ClipDrop ReLight]] software. (Applications) 2022-12-09.c [[https://creator.nightcafe.studio/|NightCafe Creator.]] Site offering access to several image generation tools; offers lots of free credits. (Applications) 2022-12-09.d [[https://www.youtube.com/watch?v=g0ZkUyiDkEU|A.I Render - New Blender A.I Render Tool For All.]] AskNK, 2022-10-20.
Video demoing the [[https://bit.ly/3gsanNR|A.I. Render Tool]] plugin for Blender. (Applications) **2022-12-13.a [[https://techcrunch.com/2022/12/13/image-generating-ai-can-copy-and-paste-from-training-data-raising-ip-concerns/|Image-generating AI can copy and paste from training data, raising IP concerns.]] Kyle Wiggers for TechCrunch, 2022-12-13. Gowthami Somepalli et al. publish a [[https://arxiv.org/pdf/2212.03860.pdf|paper]] arguing, with examples, that diffusion models are capable of reproducing training data in whole or in part. (Legal)** 2022-12-14.a [[https://kotaku.com/artstation-ai-art-generated-image-protest-controversy-1849895978|ArtStation Responds To AI Controversy, Makes Things Worse.]] Luke Plunkett for Kotaku, 2022-12-14. Artists protest Artstation site after it publishes a new FAQ taking a lukewarm stance on AI. (Ethics) 2022-12-15.a [[https://arxiv.org/abs/2212.08073|Constitutional AI: Harmlessness from AI Feedback.]] Yuntao Bai et al., Anthropic with Cornell University, Dec 15th. A method of using AI to train safer AI. (Research, Ethics) 2022-12-15.b [[https://openai.com/blog/new-and-improved-embedding-model|New and improved embedding model.]] Ryan Greene et al, OpenAI, December 15th. `text-embedding-ada-002` replaces five separate models for search and similarity at an extremely low cost. (Research) 2022-12-20.a [[https://techcrunch.com/2022/12/20/openai-releases-point-e-an-ai-that-generates-3d-models/|OpenAI releases Point-E, an AI that generates 3D models.]] Kyle Wiggers for TechCrunch, 2022-12-20. Generating point clouds, and another tool that converts them to meshes. (Research) **2022-12-21.a [[https://archive.ph/RE98u|A New Chat Bot Is a ‘Code Red’ for Google’s Search Business.]] Nico Grant and Cade Metz for the New York Times, 2022-12-21.** Evidence that Google sees ChatGPT as a credible threat. (Business) 2022-12-23.a [[https://www.theverge.com/2022/12/23/23523864/artstation-removing-anti-ai-protest-artwork-censorship|ArtStation is hiding images protesting AI art on the platform.]] Jess Weatherbed for The Verge, 2022-12-23. (Business) 2023-01-06.a [[https://www.vice.com/en/article/y3p9yg/artist-banned-from-art-reddit|Artist Banned from r/Art Because Mods Thought They Used AI.]] Samantha Cole for Motherboard, 2023-01-06. Digital artist Ben Moran is banned from the r/art subreddit for violating their "no AI art" rule for a work they claim is not AI-generated. Insensitive moderator response: "If you really are a ‘serious’ artist, then you need to find a different style." (Ethics) 2023-01-06.b [[https://www.vice.com/en/article/3admg8/a-compsci-student-built-an-app-that-can-detect-chatgpt-generated-text|A CompSci Student Built an App That Can Detect ChatGPT-Generated Text.]] Chloe Xiang for Motherboard, 2023-01-06. GPTZero, in beta, supposedly detects AI-written text 98% of the time (recall; precision not specified). (Research) 2023-01-12.a [[https://valle-demo.github.io/|VALL-E: Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers.]] Chengyi Wang et al. for Microsoft, 2023-01-05. Realistic text-to-speech of particular voices, i.e. easy "voice deepfakes." (Research) 2023-01-12.b [[https://beta.elevenlabs.io/|ElevenLabs Prime Voice AI.]] Another text-to-speech tech. (Applications) 2023-01-12.c [[https://www.vice.com/en/article/z34d43/my-ai-is-sexually-harassing-me-replika-chatbot-nudes|‘My AI Is Sexually Harassing Me’: Replika Users Say the Chatbot Has Gotten Way Too Horny.]] Samantha Cole for Motherboard, 2023-01-12. 
The Replika chatbot, launched in March 2017, has a pro subscription that unlocks "romantic relationships" and some users report it's gotten downright abusive. (Ethics) 2023-01-13.a [[https://stablediffusionlitigation.com/|Stable Diffusion litigation.]] Matthew Butterick, 2023-01-13. (Legal) 2023-01-13.b [[https://simonwillison.net/2023/Jan/13/semantic-search-answers/|How to implement Q&A against your documentation with GPT3, embeddings and Datasette.]] Simon Willison, Jan 13th. A practical tutorial on using semantic search with OpenAI embeddings to make GPT-3 effective beyond its limited context window. (Tools) 2023-01-16.a [[https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart|AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit.]] James Vincent for The Verge, 2023-01-16. Reporting on 2023-01-13.a. (Legal) 2023-01-17.a [[https://www.youtube.com/watch?v=C9LDMzMRZv8|New AI Makes Amazing DeepFakes In a Blink of an Eye!]] Two Minute Papers, 2023-01-14. Video that demos the VToonify paper and tech. (Research) 2023-01-18.a [[https://time.com/6247678/openai-chatgpt-kenya-workers/|Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.]] Billy Perrigo for Time, 2023-01-18. OpenAI used outsourced labelers to build a content filter, and it didn't go well. (Ethics) 2023-01-19.a [[https://reason.com/2023/01/19/dont-let-disney-monopolize-a-i-generated-art/|Don't Let Disney Monopolize A.I.-Generated Art.]] Jon Stokes for Reason, January 19th. Makes the case that artist lawsuits against generative AI companies (2023-01-13.a and 2023-02-03.a) will actually serve the interests of the largest IP holders. (Business, Legal) 2023-01-20.a [[https://archive.ph/CvgxW|Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight.]] Nico Grant for the New York Times, 2023-01-20. Evidence that Google sees ChatGPT as a credible threat. (Business) 2023-01-20.b [[https://www.theverge.com/2023/1/20/23563851/google-search-ai-chatbot-demo-chatgpt|Google is freaking out about ChatGPT.]] Richard Lawler and James Vincent for The Verge, 2023-01-20. Commentary on 2023-01-20.a. (Business) 2023-01-21.a [[https://archive.ph/EuaLD|55 Fascinating AI Statistics and Trends for 2022.]] Bojan Jovanovic for DataProt, 2023-01-21. "37% of organizations surveyed by Gartner in 2019 now use AI in the workplace." (Business) 2023-01-23.a [[https://openai.com/blog/openai-and-microsoft-extend-partnership/|OpenAI and Microsoft Extend Partnership.]] OpenAI, 2023-01-23. (Business) 2023-01-30.a [[https://eli5.gg/|ELI5: Explain Like I'm 5.]] A smart, lightweight wrapper around GPT-3 that caches responses. (Applications) 2023-01-30.b [[https://arxiv.org/abs/2301.13188|Extracting Training Data from Diffusion Models.]] Nicholas Carlini et al., 2023-01-30. Demonstrates the ability to extract training images from diffusion models. (Legal) 2023-01-30.c [[https://www.crosslabs.org/blog/diffusion-with-offset-noise|Diffusion With Offset Noise.]] Nicholas Guttenberg for Crosslabs, 2023-01-30. A technique for improving access to high-contrast images in latent space. (Research) 2023-02-01.a [[https://www.meetjamie.ai/|jamie - AI Assistant for Meeting Summaries.]] (Applications) 2023-02-01.b [[https://openai.com/blog/chatgpt-plus/|Introducing ChatGPT Plus.]] OpenAI, 2023-02-01. OpenAI launches a $20/month subscription to ChatGPT with better availability, faster responses, and priority access to improvements. 
(Business) 2023-02-01.c [[https://google-research.github.io/seanet/musiclm/examples/|MusicLM: Generating Music From Text.]] Andrea Agostinelli et al. for Google Research, 2023-01-26. Music generation from a language model. "To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts." (Research) 2023-02-01.d [[https://www.riffusion.com/|Riffusion.]] Music generation with Stable Diffusion. (Applications) 2023-02-03.a [[https://fingfx.thomsonreuters.com/gfx/legaldocs/byvrlkmwnve/GETTY%20IMAGES%20AI%20LAWSUIT%20complaint.pdf|Getty Images v. Stability AI.]] United States District Court for the District of Delaware, 2023-02-03. The filed action about the Getty Images watermark showing up in Stable Diffusion output. (Legal) 2023-02-03.b [[https://arxiv.org/abs/2302.01834|Coinductive guide to inductive transformer heads.]] Adam Nemecek, 2023-02-03. [[https://news.ycombinator.com/item?id=34970877|Posted to HN]] with the title "Transformer Learning Explained," which sounds helpful, but there's ample criticism in the comments. (Research) 2023-02-05.a [[https://madebyoll.in/posts/dino_diffusion/|Bare-bones Diffusion Models.]] Ollin Boer Bohan, 2023-02-05. A gentle introduction to how diffusion models generate images. (Explainers) 2023-02-06.a [[https://www.theverge.com/2023/2/6/23588033/google-chatgpt-rival-bard-testing-rollout-features|Google announces ChatGPT rival Bard.]] James Vincent for The Verge, 2023-02-06. (Business) **2023-02-09.a [[https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web|"ChatGPT is a Blurry JPEG of the Web."]] Ted Chiang, 2023-02-09.** Explores limitations and biases of LLMs as a technology, ultimately questions their usefulness. (Explainers) 2023-02-10.a [[https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/|AI-powered Bing Chat spills its secrets via prompt injection attack.]] Benj Edwards for Ars Technica, 2023-02-10. A Stanford student uses prompt injection to uncover Microsoft's original directives to its Bing chatbot. (Vulnerabilities) **2023-02-10.b [[https://arxiv.org/abs/2302.05543|Adding Conditional Control to Text-to-Image Diffusion Models.]] Lvmin Zhang and Maneesh Agrawala, 2023-02-10.** A layer on top of Stable Diffusion to support additional input conditions. They offer a number of examples, but the technique seems highly generalizable and extensible, and cost-equivalent to fine-tuning a model. [[https://github.com/lllyasviel/ControlNet|Reference implementation.]] (Research) 2023-02-10.c [[https://oneusefulthing.substack.com/p/a-quick-and-sobering-guide-to-cloning|A quick and sobering guide to cloning yourself.]] Ethan Mollick, Feb 10th. A quick step-by-step demonstration of combining generative AI tech to create a deepfake. (Explainers) **2023-02-11.a [[https://www.alanzucconi.com/2023/02/11/the-rise-of-ai-art/|The Rise of AI Art.]] Alan Zucconi, 2023-02-11.** A survey of the history of image generators, followed by a survey of common criticisms, with lots of references and examples. (Explainers) 2023-02-12.a [[https://www.youtube.com/watch?v=YJebdQ30UZQ|Transform Your Sketches into Masterpieces with Stable Diffusion ControlNet AI - How To Use Tutorial.]] SECourses, 2023-02-12. Video (16:45) explaining how to use ControlNet (2023-02-10.b).
(Tools) 2023-02-14.a [[https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/|What Is ChatGPT Doing … and Why Does It Work?]] Stephen Wolfram, 2023-02-14. (Explainers) 2023-02-15.a [[https://github.com/shyamsn97/mario-gpt|MarioGPT: Open-Ended Text2Level Generation through Large Language Models]]. Shyam Sudhakaran et al. A fun take on LLMs for content generation beyond text. (Research) 2023-02-15.b [[https://github.com/microsoft/prompt-engine|Prompt Engine: A library for helping developers craft prompts for Large Language Models.]] Microsoft. I don't fully understand why this is necessary. (Tools) 2023-02-15.c [[https://www.theverge.com/2023/2/15/23599072/microsoft-ai-bing-personality-conversations-spy-employees-webcams|Microsoft’s Bing is an emotionally manipulative liar, and people love it.]] James Vincent for The Verge, 2023-02-15. Collects reports of alarming behavior from Bing chat. Includes a disclaimer that not all reports are verifiable; even the ones that are seem like cases where users have intentionally pressed the 'bot to bad behavior. (Ethics) 2023-02-15.d [[https://stratechery.com/2023/from-bing-to-sydney-search-as-distraction-sentient-ai/|From Bing to Sydney.]] Ben Thompson, Stratechery, Feb 15th. Ben has an eye-opening series of interactions with Bing Chat's various personalities, and describes for the first time feeling empathy for Blake Lemoine, the Google engineer who claimed their AI was sentient. (2022-07-22.a) (Explainers) 2023-02-16.a [[https://time.com/6255952/ai-impact-chatgpt-microsoft-google/|The AI Arms Race Is Changing Everything.]] Andrew R. Chow and Billy Perrigo for Time, 2023-02-16. (Business) 2023-02-16.b [[https://www.npr.org/2023/02/16/1157620417/chatgpt-bing-sydney-google-bard-ai-artificial-intelligence-chatbot|Most of us are still worried about AI — but will corporate America listen?]] Lauren Hodges for NPR, 2023-02-16. A survey from early November found "Only 48% of Americans would rely on AI for everyday tasks, compared to 79% of tech experts." And that was before the recent backlash. Survey of 2,050 adults. (Ethics) 2023-02-17.a [[https://blog.roblox.com/2023/02/generative-ai-roblox-vision-future-creation/|Generative AI on Roblox: Our Vision for the Future of Creation.]] 2023-02-17. (Business) 2023-02-17.b [[https://www.stableattribution.com/|Stable Attribution]] claims to give attribution to artists whose artwork contributed to a particular output. Feedback from coworkers says this site is suspicious - it gives results even when you upload an image that wasn't created by AI, and ironically they are hosting artists' original works without their permission. (Tools) 2023-02-17.c [[https://time.com/6256529/bing-openai-chatgpt-danger-alignment/|The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter.]] Billy Perrigo for Time, 2023-02-17. Collects examples of the "Sydney" alter-ego and other creepy behavior by Bing chat. Cites 2023-02-15.c but likely has wider readership. (Ethics) 2023-02-17.d [[https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test|Introducing the AI Mirror Test, which very smart people keep failing.]] James Vincent for The Verge, 2023-02-17. An anti-hype take borrowing the mirror test from behavioral psychology. (Explainers) 2023-02-20.a [[https://www.unum.cloud/blog/2023-02-20-efficient-multimodality|Beating OpenAI CLIP with 100x less data and compute.]] Unum.
Pre-training efficiency advances, specifically in their open-sourced "UForm," which vectorizes text and images. Work by a company I hadn't heard of before this. ([[https://github.com/unum-cloud/uform|GitHub]], [[https://discord.gg/jsMURnSFM2|Discord]]) **2023-02-21.a [[https://kotaku.com/ai-comic-art-copyright-midjourney-revoked-1850150702|US Copyright Office says Midjourney art is not copyrightable.]] via Kotaku, 2023-02-21.** They concluded that generating images with Midjourney is more like searching for an existing image, or commissioning an image from a human artist with instructions, than it is like using a tool to create art yourself. (Legal) 2023-02-21.b [[https://www.vice.com/en/article/88qwqg/school-apologizes-after-using-chatgpt-to-write-email-about-mass-shooting|School Apologizes After Using ChatGPT to Write Email About Mass-Shooting.]] Janus Rose for Motherboard, 2023-02-21. The Office of Equity, Diversity and Inclusion at Vanderbilt University’s Peabody College sends an email commenting on the recent mass shooting at Michigan State University. The email includes a disclosure about the use of ChatGPT. Students are upset by this. (Ethics) 2023-02-24.a [[https://openai.com/blog/how-should-ai-systems-behave/|How should AI systems behave, and who should decide?]] OpenAI, 2023-02-16. (Explainers) 2023-02-24.b [[https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence|A sci-fi magazine has cut off submissions after a flood of AI-generated stories.]] NPR, 2023-02-24. "Clarkesworld" received over 500 machine-written stories in a month. (Business) **2023-02-24.c [[https://ai.facebook.com/blog/large-language-model-llama-meta-ai/|Introducing LLaMA: A foundational, 65-billion-parameter large language model.]] Meta AI, 2023-02-24.** "Public" release of a "smaller foundation model" designed for easier fine-tuning and research. ([[https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md|Model card]]) Uses a noncommercial license and access is granted case-by-case to researchers. (Research) 2023-02-25.a [[https://variety.com/2023/digital/news/spotify-dj-persaonlized-ai-openai-1235532195/amp/|Spotify Launches AI-Powered ‘DJ’ Feature Using OpenAI Technology.]] Todd Spangler for Variety, 2023-02-22. Uses an LLM and text-to-speech to provide a DJ meant "to provide you with insightful facts about the music, artists or genres you're listening to." It seems likely that actually picking the music is existing Spotify tech. Supposedly started rolling out but had not hit my account at time of writing. (Applications) 2023-02-25.b [[https://archive.is/W3qCe|For Chat-Based AI, We Are All Once Again Tech Companies’ Guinea Pigs.]] Christopher Mims for The Wall Street Journal, 2023-02-25. An inflammatory, maybe misleading, take with a big mass media audience that maybe hides some good points. (Ethics) 2023-02-25.c [[https://www.antipope.org/charlie/blog-static/2023/02/place-your-bets.html|Place your bets.]] Charlie Stross, 2023-02-25. Blog post that hit the top of HN for a bit today. Unfavorably compares AI hype to crypto hype and predicts a bubble. (Business) 2023-02-25.d [[https://mobile.twitter.com/jradoff/status/1629610536399675393|AI-generated live commentary for a racing game.]] Jon Radoff sharing from Ian Bell of Straight4 Studios, 2023-02-25. (Applications) 2023-02-26.a [[https://www.youtube.com/watch?v=Sqa8Zo2XWc4&t=107s|Artificial Intelligence: Last Week Tonight.]] John Oliver (HBO), 2023-02-26.
Video (27:52) which discusses how AI works and where it might be heading next. (Explainers, Ethics) 2023-02-26.b [[https://www.youtube.com/watch?v=cVxQmbf3q7Q|Offset Noise: Midjourney Dethroned.]] koiboi, 2023-02-26. Video (16:47) explaining 2023-01-30.c: an assumption in the Stable Diffusion noising process that accidentally trained the model to prefer images with approximately the same average brightness as the input noise, and a fine-tuning technique that improves the model's ability to explore high-contrast images in latent space. Also a pretty good reference for how diffusion models work. (Explainers) 2023-02-27.a [[https://www.cnbc.com/2023/02/27/snap-launches-ai-chatbot-powered-by-openais-gpt.html|Snap launches A.I. chatbot powered by OpenAI’s GPT.]] Rohan Goswami for CNBC, 2023-02-27. It's called "My AI" and available to Snapchat+ subscribers, currently $3.99/mo. (Business) 2023-02-27.b [[https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check|Keep your AI claims in check.]] Michael Atleson for the FTC, 2023-02-27. The FTC publishes business guidance on its blog reminding marketers to be careful about unsubstantiated claims about their AI products. (Business) 2023-02-27.c [[https://cognitiverevolution.substack.com/p/openais-foundry-leaked-pricing-says|OpenAI's Foundry leaked pricing says a lot – if you know how to read it.]] Erik Torenberg and Nathan Labenz, 2023-02-27. OpenAI Foundry prices and some hints about GPT-4 capabilities (including a 4x larger context window) lead to lots of speculation about what applications we'll see in the next year. "I'll personally bet that the 'robust fine-tuning' will drive most of the adoption, value, and transformation in the near term." "The new 32000-token context window is also a huge feature. This is enough for 50 pages of text or a 2-hour conversation. For many businesses, that's enough to contain your entire customer profile and history." (Business) 2023-02-28.a [[https://www.jailbreakchat.com/|Jailbreak Chat.]] A collection of ChatGPT prompt injections and exploits. (Research) 2023-02-28.b [[https://storage.googleapis.com/waymo-uploads/files/documents/safety/Safety%20Performance%20of%20Waymo%20RO%20at%201M%20miles.pdf|Safety Performance of the Waymo Rider-Only Automated Driving System at One Million Miles.]] Trent Victor et al. for Waymo, 2023. "...no reported injuries, and only two [major] collisions... an additional 18 minor-contact events..." Supports the hypothesis that Waymo is safer, but further research is needed. (Research) 2023-02-28.c [[https://huggingface.co/prompthero/openjourney|Openjourney.]] PromptHero. An open source Stable Diffusion model fine-tuned on Midjourney images. (Tools) 2023-03-01.a [[https://openai.com/blog/introducing-chatgpt-and-whisper-apis|Introducing ChatGPT and Whisper APIs.]] OpenAI, 2023-03-01. OpenAI announces public access to their ChatGPT and Whisper APIs. Their announcement highlights early adopters: Snap "My AI," Quizlet "Q-Chat," Instacart search, Shopify shopping assistant, and the Speak language learning app. (Business, Applications) 2023-03-01.b [[https://www.jonstokes.com/p/chatgpt-explained-a-guide-for-normies|ChatGPT Explained: A Normie's Guide To How It Works.]] Jon Stokes, March 1st. Discourages anthropomorphizing models and discusses their mechanics in terms of probability functions.
(Explainers) 2023-03-01.c [[https://techcrunch.com/2023/03/01/addressing-criticism-openai-will-no-longer-use-customer-data-to-train-its-models-by-default/|Addressing criticism, OpenAI will no longer use customer data to train its models by default.]] Details and comments on new policy changes. (Legal) 2023-03-02.a [[https://media.ford.com/content/fordmedia/fna/us/en/news/2023/03/02/ford-establishes-latitude-ai-to-develop-future-automated-driving.html|Ford Establishes Latitude AI to Develop Future Automated Driving Technology.]] Ford, 2023-03-02. A new subsidiary composed of 550 employees from Argo AI. (Business) 2023-03-03.a [[https://news.ycombinator.com/item?id=35007978|Facebook LLAMA is being openly distributed via torrents.]] Recently launched LLM (2023-02-24.c) previously available only to select researchers is now available on a public torrent, found via a link posted in a pull request against the official LLaMa repo. Described as a "leak" by HN. (Business) 2023-03-03.b [[https://www.reuters.com/technology/openais-long-time-backer-reid-hoffman-leaves-board-2023-03-03/|OpenAI's long-time backer Reid Hoffman leaves board.]] Reuters, March 3rd. He's stepping down "to avoid conflicts of interest as he backs more artificial intelligence companies," an indicator of how quickly the AI market has grown. (Business) 2023-03-05.a [[https://www.theverge.com/2023/3/5/23599209/companies-keep-up-chatgpt-ai-chatbots|Meet the companies trying to keep up with ChatGPT.]] Emma Roth for the Verge, March 5th. Survey of companies with chatbots: Microsoft, Google, Meta, Anthropic, You.com, Alibaba, Baidu, and a few smaller players. (Business) 2023-03-05.b [[https://archive.is/oG9k6|They thought loved ones were calling for help. It was an AI scam.]] Pranshu Verma for The Washington Post, March 5th. Tells an anecdote, thin on stats. Mentions ElevenLabs in particular. (Ethics) 2023-03-06.a [[https://puzzledpenguin.substack.com/p/chatgpt-invented-its-own-puzzle-game|ChatGPT invented its own puzzle game.]] Puzzled Penguin, March 6th. Using ChatGPT to design and implement a Sudoku-like puzzle (roughly based on parity) called [[https://sumplete.com/|Sumplete]]. It's pretty good! No rigorous proof here that it's original, but a great example of quickly inventing something with AI assistance. (Applications) 2023-03-06.b [[https://www.theverge.com/2023/3/6/23627228/microsoft-ai-future-of-work-event-date|Microsoft to detail the ‘future of work with AI’ during March 16th event.]] Tom Warren for the Verge, March 6th. (Business) 2023-03-07.a [[https://stability.ai/blog/stability-ai-acquires-init-ml-makers-of-clipdrop-application|Stability AI Acquires Init ML, Makers of Clipdrop Application.]] Stability AI blog. Clipdrop is a suite of AI-powered image editing tools. (Business) 2023-03-07.b [[https://archive.is/NP7BO|Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad Behavior.]] Ellen Huet for Bloomberg, March 7th. Explores connections between AI research, Silicon Valley money, effective altruism, rationalism and LessWrong. Reports that a pocket of extreme ideologies and "cult-like" community emerged, covering - among other things - casual misogyny and abuse. Closes with a harsh comparison of the AI community to an overfit model. CW: Discussion of sexual assault. (Ethics) 2023-03-07.c [[https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears|Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears.]] Robert Lemos for Dark Reading, March 7th. 
One security firm reports 4.2% of the 1.6 million workers it monitors have tried to send confidential or regulated information to ChatGPT. Suggests most employers will soon need official policies about use of AI services in the workplace. Fails to mention OpenAI's recent relevant policy change (2023-03-01.c). (Legal) 2023-03-07.d [[https://www.reuters.com/technology/salesforce-add-chatgpt-slack-part-openai-partnership-2023-03-07/|Salesforce to add ChatGPT to Slack as part of OpenAI partnership.]] Reuters, March 7th. Called "EinsteinGPT," it's supposedly a proprietary AI model combined with ChatGPT that will be available in Slack and other Salesforce tools. (Business) 2023-03-07.e [[https://techcrunch.com/2023/03/07/d-ids-new-web-app-gives-a-face-and-voice-to-openais-chatgpt/|D-ID’s new web app gives a face and voice to OpenAI’s ChatGPT.]] Aisha Malik for TechCrunch, March 7th. [[https://www.d-id.com/|D-ID]] launches a beta product that combines ChatGPT with speech-to-text, text-to-speech and animated avatars to build a naturalistic conversation experience. (Although the demo video still has uncanny valley vibes.) (Business) 2023-03-07.f [[https://www.yahoo.com/lifestyle/openai-tech-rapidly-being-added-202834974.html|OpenAI’s tech is rapidly being added to a new type of software that could upend how law is practiced and paid for, and how young lawyers learn the ropes.]] Jeremy Kahn for Fortune, March 7th. Covers a product called "CoCounsel" by Casetext, which is a kind of legal search engine, and a new tool from a startup named Harvey that drafts contracts and client memos. (Business) 2023-03-07.g [[https://www.prnewswire.com/news-releases/brex-brings-groundbreaking-tools-to-finance-teams-with-openai-301764592.html|Brex Brings Groundbreaking Tools to Finance Teams With OpenAI.]] Brex, March 7th. The Brex Empower platform uses OpenAI-powered tech to provide insight on corporate spend and answer business questions in real time. (Business) 2023-03-08.a [[https://writeout.ai/|Writeout.ai - Transcribe and translate any audio file.]] A free lightweight transcription tool using OpenAI Whisper (2023-03-01.a). (Tools) 2023-03-08.b [[https://techmonitor.ai/technology/ai-and-automation/openai-challenged-to-enter-chatgpt-into-new-ai-regulatory-sandbox|OpenAI challenged to enter ChatGPT into new AI regulatory sandbox.]] Ryan Morrison for Tech Monitor, March 8th. A group called [[https://forhumanity.center/|ForHumanity]] asks OpenAI to join a new [[https://spn.org/articles/what-is-a-regulatory-sandbox/|regulatory sandbox.]] (Ethics) 2023-03-08.c [[https://www.theinformation.com/articles/openai-rival-anthropic-raises-funding-at-4-1-billion-valuation|OpenAI Rival Anthropic Raises Funding at $4.1 Billion Valuation.]] Kate Clark for The Information, March 8th. Spark Capital invests $300mil, Google previously invested $400mil. (Business) 2023-03-09.a [[https://discord.com/blog/ai-on-discord-your-place-for-ai-with-friends|Discord is Your Place for AI With Friends.]] Anjney Midha for Discord, March 9th. Discord announces a number of upcoming AI features, and an AI incubator for developers that want to build AI on Discord. (Business) 2023-03-09.b [[https://azure.microsoft.com/en-us/blog/chatgpt-is-now-available-in-azure-openai-service/|ChatGPT is now available in Azure OpenAI Service.]] Eric Boyd, Microsoft, March 9th. The popular model is available through Microsoft's cloud platform for $0.002/1k tokens. 
**2023-03-10.a [[https://github.com/ggerganov/llama.cpp|llama.cpp.]] Georgi Gerganov on GitHub, March 10th.** Releases a fast C/C++ implementation of inference for Meta's LLaMA model that runs locally on commodity hardware. (Tools) 2023-03-11.a [[https://twitter.com/miolini/status/1634982361757790209|LLaMA on RPi.]] Artem Andreenko on Twitter, March 11th. Reports running the LLaMA 7B model on a 4GB Raspberry Pi 4 at about 10sec/token. (Research) 2023-03-11.b [[https://simonwillison.net/2023/Mar/11/llama/|Large language models are having their Stable Diffusion moment.]] Simon Willison, March 11th. Explains how LLaMA (2023-02-24.c) and llama.cpp (2023-03-10.a) hit the sweet spot for an explosion of experimentation. (Explainers) 2023-03-12.a [[https://cocktailpeanut.github.io/dalai/#/|Dalai.]] An NPM package that makes it very easy to run LLaMA locally on a PC. (Tools) **2023-03-13.a [[https://crfm.stanford.edu/2023/03/13/alpaca.html|Alpaca: A Strong Open-Source Instruction-Following Model.]] Rohan Taori et al., Stanford University, March 13th.** Introduces Alpaca 7B, a LLaMA 7B model fine-tuned for instruction-following, producing text-davinci-003-like behavior for under $600. (Research) 2023-03-13.b [[https://arxiv.org/abs/2303.06865|High-throughput Generative Inference of Large Language Models with a Single GPU.]] Ying Sheng et al., March 13th. Presents "FlexGen" for running LLMs with limited GPU memory. For example, it reaches 1 token/second running a 175B parameter model on a single 16GB GPU. (Research) 2023-03-13.c [[https://simonwillison.net/2023/Mar/13/alpaca/|Stanford Alpaca, and the acceleration of on-device large language model development.]] Simon Willison, March 13th. Discusses the release of Alpaca (2023-03-13.a) and the implications of affordable fine-tuning and running models on consumer hardware. (Explainers) 2023-03-13.d [[https://techcrunch.com/2023/03/13/microsoft-lays-off-an-ethical-ai-team-as-it-doubles-down-on-openai/|Microsoft lays off an ethical AI team as it doubles down on OpenAI.]] Rebecca Bellan for TechCrunch, March 13th. Microsoft lays off the small (~7 people) "ethics and society" team as part of its recent round of layoffs affecting 10k employees. They still maintain an "Office of Responsible AI" but the article questions its effectiveness. (Ethics) 2023-03-13.e [[https://github.com/openai/evals|Evals.]] OpenAI on GitHub, March 13th. OpenAI open-sources a framework and registry of benchmarks for evaluating large language models. (Tools) 2023-03-14.a [[https://twitter.com/ESYudkowsky/status/1635577836525469697?cxt=HHwWgoDSyY3Y3rItAAAA|Eliezer Yudkowsky on Alpaca.]] On Twitter, March 14th. Points out that it's a "big deal" that Stanford used OpenAI's closed-source text-davinci-003 to train their own open-source model into similar behavior. (Referring to 2023-03-13.a) (Business) 2023-03-14.b [[https://blog.google/technology/ai/ai-developers-google-cloud-workspace/|The next generation of AI for developers and Google Workspace.]] Thomas Kurian for Google, March 14th. Google Cloud introduces PaLM API (access to LLMs in their cloud) and MakerSuite (a prototyping tool), along with a handful of other gen-AI based Cloud features. Also announces a closed beta for a "Help me write" feature in Gmail and Docs. (Business) 2023-03-14.c [[https://openai.com/research/gpt-4|GPT-4.]] OpenAI, March 14th. The new model is "multimodal," meaning it can accept image inputs along with text. It scores in the top 10% on a simulated bar exam. (Research)
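A minimal sketch of calling the new model through the `openai` Python library as it worked at the time (the pre-1.0 ChatCompletion interface). The model name, prompt, and environment variable are assumptions; image input was not part of the initial public API.

<code python>
# Minimal chat-completion sketch using the openai library's pre-1.0 interface.
# Assumes an API key in OPENAI_API_KEY and that the account has GPT-4 access.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the GPT-4 announcement in one sentence."},
    ],
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
</code>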
2023-03-14.d [[https://twitter.com/ggerganov/status/1635605532726681600|LLaMA on Pixel 5.]] Georgi Gerganov on Twitter, March 14th. Reports running LLaMA 7B on Pixel 5 at 1 token/second. (Research) 2023-03-14.e [[https://openai.com/customer-stories/khan-academy|Khan Academy integrates GPT-4 as every student’s customized tutor.]] OpenAI, March 14th. Announces "Khanmigo," an AI-powered assistant that functions as both a virtual tutor for students and a classroom assistant for teachers. ([[https://www.khanacademy.org/khan-labs|Khan Academy's own page.]]) (Applications) 2023-03-14.f [[https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4|Confirmed: the new Bing runs on OpenAI’s GPT-4.]] Yusuf Mehdi, Microsoft, March 14th. The new Bing has already been using GPT-4 for about five weeks. (Business) 2023-03-14.g [[https://blog.duolingo.com/duolingo-max/|Introducing Duolingo Max, a learning experience powered by GPT-4.]] Duolingo, March 14th. Announces a new subscription tier giving access to "Explain My Answer" and "Roleplay" features powered by GPT-4. The new features are currently limited to Spanish and French. The post emphasizes user feedback features as well. (Applications) 2023-03-14.h [[https://www.anthropic.com/index/introducing-claude|Introducing Claude.]] Anthropic, March 14th. Limited public release of Claude and Claude Instant, competitors to ChatGPT. (Research, Business) 2023-03-15.a [[https://github.com/setzer22/llama-rs|LLaMA-rs.]] A Rust port of llama.cpp (2023-03-10.a). (Tools) 2023-03-15.b [[https://archive.ph/sp54w|A battle royal is brewing over copyright and AI.]] The Economist, March 15th. Compares generative AI to the rise and fall of Napster, effectively predicting that creative industries will wield copyright law to crush the new technology. Caution: Technical language is fast and loose in this one, describing AIs as "mining databases." (Legal) 2023-03-15.c [[https://cdn.openai.com/papers/gpt-4-system-card.pdf|GPT-4 System Card.]] OpenAI, March 15th. Describes capabilities, limitations, risks and mitigations of GPT-4. (Research) 2023-03-15.d [[https://gizmodo.com/ai-midjourney-free-ai-art-launches-magazine-1850229973|AI Eats Media: Midjourney Launches a Magazine.]] Blake Montgomery for Gizmodo, March 15th. Midjourney launches a [[https://mag.midjourney.com/|monthly print magazine called "Midjourney"]] for $4/month. (Business) 2023-03-15.e [[https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/|Prompt Engineering.]] Lilian Weng, March 15th. A thorough survey of prompt engineering techniques and resources. (Explainers) 2023-03-16.a [[https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/|Introducing Microsoft 365 Copilot – your copilot for work.]] Jared Spataro, Microsoft, March 16th. Microsoft announces a number of LLM tools integrated into its office suite. (Business) 2023-03-16.b [[https://github.com/antimatter15/alpaca.cpp|Alpaca.cpp.]] Similar to llama.cpp. (Tools) 2023-03-16.c [[https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence|Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.]] Copyright Office, Library of Congress, March 16th. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship.
As a result, that material is not protected by copyright and must be disclaimed in a registration application. (Legal) 2023-03-16.d [[https://hai.stanford.edu/news/was-written-human-or-ai-tsu|Was this written by a human or AI? ¯\_(ツ)_/¯]] Prabha Kannan, Stanford, March 16th. A Stanford team's research suggests humans can only identify text written by an AI about 50% of the time - and when we're wrong, we're often wrong for the same reasons. (Research, Ethics) 2023-03-17.a [[https://arxiv.org/abs/2303.09752|CoLT5: Faster Long-Range Transformers with Conditional Computation.]] Joshua Ainslie et al., Google Research, March 17th. Proposes a transformer model that can make use of extremely long inputs, up to 64k tokens. (Research) 2023-03-17.b [[https://openai.com/research/gpts-are-gpts|GPTs are GPTs: An early look at the labor market impact potential of large language models.]] Tyna Eloundou et al., OpenAI, March 17th. Estimates that 19% of workers may see at least 50% of their tasks impacted by the new technology. (Research) 2023-03-19.a [[https://archive.ph/KRvrq|OpenAI GPT-4 users win followers by sharing how they’re using it—including to start businesses in ‘HustleGPT challenge.']] Steve Mollman for Fortune, March 19th. A hashtag #HustleGPT takes off for people using GPT-4 to generate business ideas. (Business) 2023-03-20.a [[https://archive.ph/83vKn|Nearly Half of Firms Are Drafting Policies on ChatGPT Use.]] Jo Constantz for Bloomberg, March 20th. A recent Gartner poll explores new policies businesses are deploying in response to generative AI. Results suggest a wide range of responses (and non-responses) in the market. (Business) 2023-03-20.b [[https://petals.ml/|Petals - Run 100B+ language models at home, BitTorrent‑style.]] A technology for running and fine-tuning large LLMs (e.g. 176B parameter models) on a torrent-like "swarm" of independent clients. [[https://arxiv.org/abs/2209.01188|Associated paper.]] (Tools) 2023-03-20.c [[https://news.ycombinator.com/item?id=35242069|OpenAI to discontinue support for the Codex API.]] Hacker News. On March 23rd the Codex API (code completions) shuts down. Customers are encouraged to move to GPT-3.5-Turbo. (Business) 2023-03-20.d [[https://www.theverge.com/2023/3/20/23648113/text-to-video-generative-ai-runway-ml-gen-2-model-access|Text-to-video AI inches closer as startup Runway announces new model.]] James Vincent, The Verge, March 20th. Runway announces its "Gen-2" text-to-video model. Article shows some samples and links to past text-to-video work by Meta and Google. (Business) 2023-03-20.e [[https://app.roll20.net/forum/post/11379379/ai-generated-artwork-policy-updates|AI-Generated Artwork Policy Updates.]] Dean Bigbee, Roll20, March 20th. The Roll20 marketplace will not accept any product that utilizes AI-generated art. DriveThru marketplaces ban AI-generated "standalone artwork products" and require publishers to tag products that use AI-generated artwork. (Business) 2023-03-20.f [[https://openai.com/blog/march-20-chatgpt-outage|March 20 ChatGPT outage: Here’s what happened.]] OpenAI, March 20th. Brief postmortem of a bug in ChatGPT exposing users' personal data, including payment data, in some cases. Lots of media characterizes this as a "leak" and it's later cited in actions by data protection and privacy agencies. (Legal) 2023-03-21.a [[https://apnews.com/article/tiktok-china-cybersecurity-data-privacy-595f9ae7c0a1fc22f0b285cede6bd67c|TikTok bans deepfakes of young people as it updates guidelines.]] Kelvin Chan for the Associated Press, March 21st.
TikTok clarifies its policy, saying all deepfakes or realistic manipulated content must be labeled to indicate they’re fake or altered. Deepfakes of private figures and young people are not allowed. Deepfakes of public figures are OK in certain contexts, such as for artistic or educational content, but not for political or commercial endorsements. (Business) 2023-03-21.b [[https://blog.google/technology/ai/try-bard/|Try Bard and share your feedback.]] Sissie Hsiao and Eli Collins, Google, March 21st. Google Bard enters closed beta. (Business) 2023-03-21.c [[https://virtualface.app/|Virtual Face.]] Synthetic professional headshots in under 30 minutes for about $10. (Applications) 2023-03-21.d [[https://www.watermelontools.com/|Watermelon: AI-Powered Code Archeology Toolbox.]] (Applications) 2023-03-21.e [[https://www.adobe.com/sensei/generative-ai/firefly.html|Adobe Firefly.]] AI image generator - actually seems to be a whole suite of generative AI tools. (Applications) 2023-03-21.f [[https://blogs.microsoft.com/blog/2023/03/21/create-images-with-your-words-bing-image-creator-comes-to-the-new-bing/|Bing image creator.]] (Applications) 2023-03-21.g [[https://azure.microsoft.com/en-us/blog/introducing-gpt4-in-azure-openai-service/|Introducing GPT-4 in Azure OpenAI Service.]] Eric Boyd, Microsoft, March 21st. Quite a fast follow on 2023-03-09.b. (Business) 2023-03-21.h [[https://archive.ph/iUhOB|New AI Startup Accelerator Will Partner With OpenAI, Microsoft.]] Dina Bass, Bloomberg, March 21st. Accelerator "Neo" founded by Ali Partovi offers AI API credits to its startups. (Business) 2023-03-21.i [[http://neil-clarke.com/submissions-update/|Submissions Update.]] Neil Clarke, March 21st. //Clarkesworld// magazine, which closed submissions in February due to a flood of generated content (2023-02-24.b), reports on reopening submissions and still seeing a large percentage of spam. (Business) 2023-03-21.j [[https://www.gatesnotes.com/The-Age-of-AI-Has-Begun|The Age of AI has begun.]] Bill Gates, March 21st. Essay exploring ways AI could help solve critical global problems. //"...market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity."// (Explainers) 2023-03-22.a [[https://blog.mozilla.org/en/mozilla/introducing-mozilla-ai-investing-in-trustworthy-ai/|Introducing Mozilla.ai: Investing in Trustworthy AI.]] Mark Surman, Mozilla, March 22nd. Founded with $30M, led by Moez Draief, initial vision "make it easy to develop trustworthy AI products." (Business, Ethics) 2023-03-22.b [[https://github.blog/2023-03-22-github-copilot-x-the-ai-powered-developer-experience/|GitHub Copilot X: The AI-powered developer experience.]] Thomas Dohmke, GitHub, March 22nd. Copilot upgrades to GPT-4, announces chat and voice interfaces and will soon interact with pull requests, docs, and the CLI. (Business, Applications) 2023-03-22.c [[https://arxiv.org/abs/2303.12712|Sparks of Artificial General Intelligence: Early experiments with GPT-4.]] Sébastien Bubeck et al., Microsoft Research, March 22nd. Proposes that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system" and explains their reasoning with examples. 
(Research) 2023-03-22.d [[https://archive.ph/aSFxw|OpenAI tech gives Microsoft's Bing a boost in search battle with Google.]] Akash Sriram and Chavi Mehta for Reuters, March 22nd. Page visits on Bing have risen 15.8% between Feb 7 and March 20; Google visits have declined about 1% in the same period. (Business) 2023-03-22.e [[https://twitter.com/DV2559106965076/status/1638769434763608064|Hidden information in the GPT-4 paper.]] An anonymous Twitter user claims to have discovered commented-out sections in the recent paper's (2023-03-22.c) LaTeX source code. The removed content appears to include two sections on toxicity. ([[https://mem.ai/p/Gw4E9TbVgN0aP35S8hBo|saved thread]]) (Ethics) **2023-03-23.a [[https://openai.com/blog/chatgpt-plugins|ChatGPT plugins.]] OpenAI, March 23rd.** Speaking of tool use, OpenAI adds a plugin ecosystem; plugins are effectively tools the LLM can call into, turning it into a smart assistant. The demo video shows looking up a recipe and ordering ingredients on Instacart. Here's a [[https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/|corresponding blog post]] about the Wolfram Alpha plugin. There are only a dozen plugins available at launch but someone [[https://twitter.com/rez0__/status/1639259413553750021|found evidence of more.]] (Business) 2023-03-23.b [[https://about.sourcegraph.com/blog/cheating-is-all-you-need|Cheating is All You Need.]] Steve Yegge, Sourcegraph blog, March 23rd. Explains why the author thinks LLMs are as big a deal as the internet itself. Proposes "the winners in the AI space will have data moats" because effectiveness is more about what you feed into the context window than anything else. Then introduces "Cody," the Sourcegraph coding assistant. (Business) 2023-03-23.c [[https://rodneybrooks.com/what-will-transformers-transform/|What Will Transformers Transform?]] Rodney Brooks, March 23rd. An anti-hype take. Argues that successful deployments of AI systems will keep a person in the loop for a long time to come. Makes some specific (but not too specific) predictions about the next seven years. (Explainers) 2023-03-23.d [[https://twitter.com/theshawwn/status/1638925249709240322|Facebook is aggressively going after LLaMA repos with DMCAs.]] Shawn Presser on Twitter, March 23rd. Meta tries to control its leaked LLM (2023-02-24.c). Creator of dalai (2023-03-12.a) launches a decentralized model-sharing platform called [[https://ipfs.io/ipfs/QmYyucgBQVfs9JXZ2MtmkGPAhgUjNgyGE6rcJT1KybQHhp/index.html|GOAT]] in response. (Business) 2023-03-23.e [[https://andrewmayneblog.wordpress.com/2023/03/23/chatgpt-code-interpreter-magic/|ChatGPT + Code Interpreter = Magic.]] Andrew Mayne, March 23rd. More breathless excitement about the plugins announcement (2023-03-23.a), with lots of examples of new capabilities. (Explainers) 2023-03-24.a [[https://grady.io/post-gpt-computing/|Post-GPT Computing.]] Grady Simon, March 24th. An early shocked response (one of many) to the effectiveness of OpenAI plugins (2023-03-23.a). (Business) 2023-03-24.b [[https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html|Hello Dolly: Democratizing the magic of ChatGPT with open models.]] Mike Conover et al. for the Databricks company blog, March 24th. Debuts "Dolly," a 6-billion-parameter instruction-following LLM, along with instructions for affordably recreating it by fine-tuning the `gpt-j-6B` model from May 2021. (Business)
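A hedged sketch of the Dolly-style setup: load the 2021-era GPT-J 6B checkpoint with Hugging Face //transformers// and render one instruction-following record as training text. The model ID, prompt template, and example record are illustrative assumptions, not Databricks' exact recipe.

<code python>
# Hedged sketch of a Dolly-style starting point: GPT-J 6B plus instruction/response pairs.
# The model ID and prompt template are assumptions, not the exact Databricks recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # the May 2021 checkpoint referenced above
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # roughly 24 GB of weights in fp32

def format_record(instruction: str, response: str) -> str:
    """Render one instruction-following record as plain training text."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

text = format_record(
    "Explain what an instruction-tuned model is.",
    "A base language model further trained on instruction/response pairs so it follows directions.",
)
batch = tokenizer(text, return_tensors="pt")
# From here, a standard causal-LM fine-tuning loop (e.g. transformers.Trainer) would
# minimize next-token loss over many such records to produce a Dolly-like model.
</code>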
2023-03-25.a [[https://www.atmosera.com/ai/understanding-chatgpt/|Understanding ChatGPT.]] Jeff Prosise. A technical explainer of the history of LLMs from Transformers through BERT and GPT-3. (Explainers) 2023-03-25.b [[https://oneusefulthing.substack.com/p/superhuman-what-can-ai-do-in-30-minutes|Superhuman: What can AI do in 30 minutes?]] Ethan Mollick, March 25th. Explores productivity potential of genAI for knowledge work through a few anecdotal experiments. (Applications) 2023-03-27.a [[https://stratechery.com/2023/chatgpt-learns-computing/|ChatGPT Gets a Computer.]] Ben Thompson, Stratechery, March 27th. Reflections on OpenAI's introduction of plugins (2023-03-23.a), how it aligns with the theory that LLMs are fundamentally different and more "human" according to a certain interpretation of intelligence, and how this collision of conventional computing and inference more-or-less solves the hallucination problem. (Business) 2023-03-27.b [[https://marginalrevolution.com/marginalrevolution/2023/03/existential-risk-and-the-turn-in-human-history.html|Existential risk, AI, and the inevitable turn in human history.]] Tyler Cowen, Marginal Revolution, March 27th. "Virtually all of us have been living in a bubble 'outside of history.' [...] Hardly anyone [...] is prepared to live in actual 'moving' history. [...] AI is very likely to overturn most of our apple carts." (Ethics) 2023-03-27.c [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|Pause Giant AI Experiments: An Open Letter.]] Future of Life Institute. An open letter framing AI as an existential threat and calling for a 6-month pause on training of systems more powerful than GPT-4, to facilitate establishment of standardized safety protocols and independent oversight. The letter has a number of public figure signatories including Elon Musk, Steve Wozniak, Andrew Yang and Max Tegmark. Future of Life Institute is primarily funded by the Musk Foundation. (Ethics) 2023-03-27.d [[https://simonwillison.net/2023/Mar/27/ai-enhanced-development/|AI-enhanced development makes me more ambitious with my projects.]] Simon Willison, March 27th. Demos using ChatGPT to build a system to capture and archive ChatGPT conversations, and ends with a short list of other projects where ChatGPT has been useful. (Explainers) 2023-03-28.a [[https://www.cerebras.net/blog/cerebras-gpt-a-family-of-open-compute-efficient-large-language-models/|Cerebras-GPT: A Family of Open, Compute-efficient, Large Language Models.]] Nolan Dey et al., Cerebras, March 28th. Announces seven open GPT models ranging from 111 million to 13 billion parameters. Supposedly trained at lower cost, and with less energy, than any publicly available model to date. (Research) 2023-03-28.b [[https://www.theverge.com/2023/3/28/23660101/ai-competition-ftc-doj-lina-khan-jonathan-kanter-antitrust-summit|The US government is gearing up for an AI antitrust fight.]] Adi Robertson for The Verge, March 28th. The FTC and DOJ comment that general disruption of the industry, combined with the AI-specific advantage conferred on large companies by their access to compute, storage, and training data, produces an environment where anticompetitive tactics are likely to emerge. (Legal) 2023-03-28.c [[https://archive.is/O2wRB|Why You Fell for the Fake Pope Coat.]] Charlie Warzel, The Atlantic, March 28th.
Compares viral Midjourney images of the Trump arrest to the Pope puffer coat, pointing out that the latter was more believable precisely because it was low-stakes, and uses this as an illustration of the more effective sorts of misinformation we're likely to see in the near future. (Ethics) 2023-03-28.d [[https://news.microsoft.com/2023/03/28/with-security-copilot-microsoft-brings-the-power-of-ai-to-cyberdefense/|With Security Copilot, Microsoft brings the power of AI to cyberdefense.]] Microsoft, March 28th. The [[https://news.microsoft.com/ai-security-2023/|new product]] is in private preview. It's trained on cybersecurity threat intelligence and expertise and will receive ongoing updates. It gives security teams easy access to knowledge of the latest threats, and integrates with existing security tools to help "see through the noise of web traffic." (Applications) 2023-03-29.a [[https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci|A misleading open letter about sci-fi AI dangers ignores the real risks.]] A couple of researchers at Princeton argue that 2023-03-27.c focuses on the wrong risks of AI and offer an alternative set of concrete near-term risks they think we should be addressing. (Ethics) 2023-03-29.c [[https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach|AI regulation: a pro-innovation approach.]] UK Department for Science, Innovation and Technology and Office for Artificial Intelligence, March 29th. Official UK policy proposing a relatively hands-off approach to regulation, prioritizing innovation over control. (Regulation) 2023-03-30.a [[https://www.caidp.org/cases/openai/|CAIDP FTC Complaint, In the matter of Open AI, March 30, 2023.]] Center for AI and Digital Policy. An FTC complaint claiming that OpenAI violated Section 5 of the FTC Act by releasing GPT-4, a product "that is biased, deceptive, and a risk to privacy and public safety." Marc Rotenberg, president of the CAIDP, is a signatory on 2023-03-27.c. The complaint quotes OpenAI's own acknowledgement of risks extensively. It appeals to the OECD AI Principles (2019) and the UGAI framework (2018) as relevant public policy norms for the governance of AI, and cites the open letter above as a signal that the FTC urgently needs to act. It also quotes the Stochastic Parrots paper (2021-03-01.a) and a number of other researchers on the need to slow down. (Ethics) 2023-03-30.b [[https://www.theverge.com/2023/3/30/23662940/deepfake-viral-ai-misinformation-midjourney-stops-free-trials|AI image generator Midjourney stops free trials but says influx of new users to blame.]] James Vincent for the Verge, March 30th. The company cites "massive amounts of people making throwaway accounts to get free images," likely in the wake of a viral how-to video, but other outlets note that the change follows some Midjourney images hitting mainstream news. Free trials may return in the future. (Business) 2023-03-30.c [[https://www.beuc.eu/press-releases/investigation-eu-authorities-needed-chatgpt-technology|Investigation by EU authorities needed into ChatGPT technology.]] The European Consumer Organization (BEUC) calls for EU and national authorities to investigate ChatGPT following the CAIDP complaint (2023-03-30.a). (Ethics) 2023-03-31.a [[https://fly.io/ruby-dispatch/pairing-with-gpt-4/|Pairing With GPT-4.]] Brad Gessler, Fly.io. Example case of using GPT-4 on a Ruby task.
Takeaways: it's a useful tool; you still need to know what you're doing; it's great at jumping between contexts; it gives plausible answers that could be wrong; and it gets stuck when the context is too large. (Explainers) 2023-03-31.b [[https://www.politico.eu/article/italian-privacy-regulator-bans-chatgpt/|Italian privacy regulator bans ChatGPT.]] Clothilde Goujard for Politico, March 31st. The national data protection authority said it will immediately block and investigate OpenAI for a potential violation of GDPR. (Legal) 2023-03-31.c [[https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai|Humanness in the Age of AI.]] Worldcoin blog, March 31st. On the importance of proof-of-personhood (PoP) in a world where intelligence tests are increasingly insufficient to prove human-ness, and how an open identity protocol could work (promoting their own). (Explainers) 2023-03-31.d [[https://www.dair-institute.org/blog/letter-statement-March2023|Statement from the listed authors of Stochastic Parrots on the “AI pause” letter.]] Timnit Gebru et al., March 31. A response to 2023-03-27.c, denouncing the AI hype, anthropomorphizing and longtermism behind it, and instead recommending that regulators focus on "transparency, accountability and preventing exploitative labor practices." (Ethics) 2023-04-04.a [[https://archive.is/kCklr|We need a much more sophisticated debate about AI.]] Jamie Susskind, Financial Times, April 4th. A call for more nuanced debate about AI regulation. (Regulation) **2023-04-04.b [[https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index_Report_2023.pdf|Artificial Intelligence Index Report 2023.]] Stanford Institute for Human-Centered Artificial Intelligence, April 2023.** Stanford releases its annual 300+ page report on the state of AI. There is a //lot// here, but key takeaways include: misuse of AI is rapidly rising; organizations that have adopted AI report realizing meaningful cost decreases and revenue increases; only 35% of Americans agreed that products and services using AI had more benefits than drawbacks. (Explainers) 2023-04-04.c [[https://mitchellh.com/writing/ai-through-a-cloud-lens|Growth of AI Through a Cloud Lens.]] Mitchell Hashimoto, April 4th. Compares the rise of AI to the rise of cloud, looking for similarities that might predict a platform shift; finds them. (Business) 2023-04-04.d [[https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/|OPC launches investigation into ChatGPT.]] Office of the Privacy Commissioner of Canada, April 4th. The investigation was launched in response to a complaint alleging the collection, use and disclosure of personal information without consent. (Legal) 2023-04-04.e [[https://confusedbit.dev/posts/how_does_gpt_work/|Simply explained: how does GPT work?]] March 4th. (Explainers) 2023-04-05.a [[https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/|Introducing Segment Anything: Working toward the first foundation model for image segmentation.]] Meta AI, April 5th. A foundation model for separating objects within an image (a usage sketch follows below). (Research) 2023-04-05.b [[https://robotic.substack.com/p/behind-the-curtain-ai|Behind the curtain: what it feels like to work in AI right now.]] Nathan Lambert, Apr 5. An ML scientist at Hugging Face discusses the degree to which the ChatGPT moment has shaken up the industry. (Explainers)
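Relating to Segment Anything (2023-04-05.a) above: a hedged sketch of point-prompted segmentation with Meta's //segment-anything// package. The checkpoint filename and the click coordinates are assumptions.

<code python>
# Hedged sketch of point-prompted segmentation with the segment-anything package.
# Assumes the package is installed and a ViT-H checkpoint file has been downloaded;
# the filename and the click coordinates below are assumptions.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("photo.jpg").convert("RGB"))  # H x W x 3, uint8
predictor.set_image(image)

# One foreground click (x, y); label 1 means "this point is on the object".
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # a few candidate masks with confidence scores
</code>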
2023-04-06.a [[https://apnews.com/article/chatgpt-openai-data-privacy-italy-1e3f070ca86ec234cae4d08ac8443879|OpenAI to offer remedies to resolve Italy’s ChatGPT ban.]] Kelvin Chan, Associated Press, April 6th. In a video call OpenAI executives promised to set out measures to address the concerns (2023-03-31.b). (Legal) 2023-04-06.b [[https://jonathanturley.org/2023/04/06/defamed-by-chatgpt-my-own-bizarre-experience-with-artificiality-of-artificial-intelligence/|Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence”.]] Jonathan Turley, Apr 6. The author (a law professor) shares his experience after ChatGPT fabricated a news story that accused the author of sexually harassing one of his students. (Ethics) 2023-04-06.c [[https://www.reddit.com/r/ChatGPT/comments/12diapw/gpt4_week_3_chatbots_are_yesterdays_news_ai/|GPT-4 Week 3. Chatbots are yesterdays news. AI Agents are the future. The beginning of the proto-agi era is here.]] Linkdump of notable applications of bleeding-edge LLMs. (Explode here) 2023-04-10.a [[https://archive.is/upxPn|OpenAI CEO Plans Japan Expansion After Prime Minister Meeting.]] Yuki Hagiwara, Bloomberg, April 10. (Business) 2023-04-11.a [[https://huyenchip.com/2023/04/11/llm-engineering.html|Building LLM applications for production.]] Chip Huyen, Apr 11. (Explainers) 2023-04-12.a [[https://github.com/databrickslabs/dolly/tree/master/data|databricks-dolly-15k is an open source dataset of instruction-following records.]] Databricks. Released under a permissive Creative Commons license (a loading sketch appears after the final entry below). 2023-04-12.b [[https://txt.cohere.ai/what-are-transformer-models/|What Are Transformer Models and How Do They Work?]] Luis Serrano, Cohere, Apr 12. (Explainers) 2023-04-16.a [[https://magazine.sebastianraschka.com/p/understanding-large-language-models|Understanding Large Language Models: A Cross-Section of the Most Relevant Literature To Get Up to Speed.]] Sebastian Raschka, Apr 16. (Explainers) 2023-04-17.a [[https://www.bbc.com/news/entertainment-arts-65298834|AI-generated Drake and The Weeknd song goes viral.]] Mark Savage, BBC News, Apr 17. A song [[https://www.youtube.com/watch?v=VszJPLAtK0U|"Heart On My Sleeve"]] using the cloned voices of Drake and The Weeknd was viewed 8.5 million times over the weekend. 2023-04-17.b [[https://www.together.xyz/blog/redpajama|RedPajama, a project to create leading open-source models, starts by reproducing LLaMA training dataset of over 1.2 trillion tokens.]] (Research) 2023-04-17.c [[https://minigpt-4.github.io/|MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models.]] (Research) 2023-04-17.d [[https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/|OpenAI’s CEO Says the Age of Giant AI Models Is Already Over.]] Will Knight, Wired, Apr 17. 2023-04-19.a [[https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/|Inside the secret list of websites that make AI like ChatGPT sound smart.]] Washington Post, Apr 19. 2023-07-06.a [[https://openai.com/blog/gpt-4-api-general-availability|GPT-4 is Generally Available.]]
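The loading sketch promised above for databricks-dolly-15k (2023-04-12.a), assuming the Hugging Face //datasets// library and the dataset ID and field names from the published dataset card (instruction, context, response, category):

<code python>
# Hedged sketch: load and inspect the databricks-dolly-15k instruction-following dataset.
# Assumes the datasets library is installed and the dataset is mirrored on the
# Hugging Face Hub as "databricks/databricks-dolly-15k"; field names may differ.
from datasets import load_dataset

ds = load_dataset("databricks/databricks-dolly-15k", split="train")
print(len(ds))  # expected to be roughly 15,000 records

record = ds[0]
# Each record is expected to hold a natural-language instruction, optional context,
# a human-written response, and a category label.
for field in ("instruction", "context", "response", "category"):
    print(field, "->", record.get(field))
</code>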