Commonplace

//Current revision: 2025/03/18 08:56, brad.//
> //We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. - Roy Amara//
  
Most of the content here is about generative AI (large language models and diffusion models), which is the hot topic as of February 2023, but there's a scattering of broader AI history and issues as well. [[#end_of_page|Jump to end.]]
  
Caveats:
  * **Mostly stopped updating in April 2023; keeping up got exhausting.** There are better sources out there for current developments. Consider this a static account of the late 2022/early 2023 hype.
  * I am not a machine learning expert, nor do I work in the field. This page is a scratchpad, reflecting my current efforts to catch up in my understanding of the tech and surrounding issues.
  * All comments and opinions are my own, and are not intended to reflect the positions of my employer or anyone affiliated with me.
[[#ref2010|2010]], [[#ref2015|2015]], [[#ref2016|2016]], [[#ref2017|2017]], [[#ref2018|2018]], [[#ref2019|2019]], [[#ref2020|2020]], [[#ref2021|2021]]\\
2022: [[#ref2022_01|Jan]], [[#ref2022_02|Feb]], [[#ref2022_03|Mar]], [[#ref2022_04|Apr]], [[#ref2022_05|May]], [[#ref2022_06|Jun]], [[#ref2022_07|Jul]], [[#ref2022_08|Aug]], [[#ref2022_09|Sep]], [[#ref2022_10|Oct]], [[#ref2022_11|Nov]], [[#ref2022_12|Dec]]\\
2023: [[#ref2023_01|Jan]], [[#ref2023_02|Feb]], [[#ref2023_03|Mar]], [[#ref2023_04|Apr]], [[#ref2023_07|Jul]]
  
<BOOKMARK:ref2010>
  
2023-03-29.a [[https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci|A misleading open letter about sci-fi AI dangers ignores the real risks.]] Two researchers at Princeton argue that 2023-03-27.c focuses on the wrong risks of AI, and offer an alternative set of concrete near-term risks they think we should be addressing. (Ethics)

2023-03-29.c [[https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach|AI regulation: a pro-innovation approach.]] UK Department for Science, Innovation and Technology and Office for Artificial Intelligence, March 29th. Official UK policy proposing a relatively hands-off approach to regulation, prioritizing innovation over control. (Regulation)
  
2023-03-30.a [[https://www.caidp.org/cases/openai/|CAIDP FTC Complaint, In the matter of Open AI, March 30, 2023.]] Center for AI and Digital Policy. An FTC complaint claiming that OpenAI violated Section 5 of the FTC Act by releasing GPT-4, a product "that is biased, deceptive, and a risk to privacy and public safety." Marc Rotenberg, president of the CAIDP, is a signatory on 2023-03-27.c. The complaint quotes OpenAI's own acknowledgement of risks extensively. It appeals to the OECD AI Principles (2019) and the UGAI framework (2018) as relevant public policy norms for the governance of AI, and cites the open letter above as a signal that the FTC urgently needs to act. It also quotes the Stochastic Parrots paper (2021-03-01.a) and a number of other researchers on the need to slow down. (Ethics)
2023-04-04.a [[https://archive.is/kCklr|We need a much more sophisticated debate about AI.]] Jamie Susskind, Financial Times, April 4th. A call for more nuanced debate about AI regulation. (Regulation)
  
**2023-04-04.b [[https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index_Report_2023.pdf|Artificial Intelligence Index Report 2023.]] Stanford Institute for Human-Centered Artificial Intelligence, April 4th.** Stanford releases its annual 300+ page report on the state of AI. There is a //lot// here, but key takeaways include: misuse of AI is rapidly rising; organizations that have adopted AI report realizing meaningful cost decreases and revenue increases; only 35% of Americans agreed that products and services using AI had more benefits than drawbacks. (Explainers)

2023-04-04.c [[https://mitchellh.com/writing/ai-through-a-cloud-lens|Growth of AI Through a Cloud Lens.]] Mitchell Hashimoto, April 4th. Compares the rise of AI to the rise of cloud, looking for similarities that might predict a platform shift; finds them. (Business)

2023-04-04.d [[https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/|OPC launches investigation into ChatGPT.]] Office of the Privacy Commissioner of Canada, April 4th. The investigation was launched in response to a complaint alleging the collection, use, and disclosure of personal information without consent. (Legal)

2023-04-04.e [[https://confusedbit.dev/posts/how_does_gpt_work/|Simply explained: how does GPT work?]] March 4th. (Explainers)

2023-04-05.a [[https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/|Introducing Segment Anything: Working toward the first foundation model for image segmentation.]] Meta AI, April 5th. A foundation model for separating objects within an image. (Research)

2023-04-05.b [[https://robotic.substack.com/p/behind-the-curtain-ai|Behind the curtain: what it feels like to work in AI right now.]] Nathan Lambert, Apr 5. An ML scientist at Hugging Face discusses the degree to which the ChatGPT moment has shaken up the industry. (Explainers)

2023-04-06.a [[https://apnews.com/article/chatgpt-openai-data-privacy-italy-1e3f070ca86ec234cae4d08ac8443879|OpenAI to offer remedies to resolve Italy’s ChatGPT ban.]] Kelvin Chan, Associated Press, April 6th. In a video call, OpenAI executives promised to set out measures to address the concerns (2023-03-31.b). (Legal)

2023-04-06.b [[https://jonathanturley.org/2023/04/06/defamed-by-chatgpt-my-own-bizarre-experience-with-artificiality-of-artificial-intelligence/|Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence”.]] Jonathan Turley, Apr 6. The author (a law professor) shares his experience after ChatGPT fabricated a news story accusing him of sexually harassing one of his students. (Ethics)

2023-04-06.c [[https://www.reddit.com/r/ChatGPT/comments/12diapw/gpt4_week_3_chatbots_are_yesterdays_news_ai/|GPT-4 Week 3. Chatbots are yesterdays news. AI Agents are the future. The beginning of the proto-agi era is here.]] Link dump of notable applications of bleeding-edge LLMs. (Explode here)

2023-04-10.a [[https://archive.is/upxPn|OpenAI CEO Plans Japan Expansion After Prime Minister Meeting.]] Yuki Hagiwara, Bloomberg, April 10. (Business)

2023-04-11.a [[https://huyenchip.com/2023/04/11/llm-engineering.html|Building LLM applications for production.]] Chip Huyen, Apr 11. (Explainers)

2023-04-12.a [[https://github.com/databrickslabs/dolly/tree/master/data|databricks-dolly-15k is an open source dataset of instruction-following records.]] Databricks. Released under a permissive Creative Commons license.

2023-04-12.b [[https://txt.cohere.ai/what-are-transformer-models/|What Are Transformer Models and How Do They Work?]] Luis Serrano, Cohere, Apr 12. (Explainers)

2023-04-16.a [[https://magazine.sebastianraschka.com/p/understanding-large-language-models|Understanding Large Language Models: A Cross-Section of the Most Relevant Literature To Get Up to Speed.]] Sebastian Raschka, Apr 16. (Explainers)

2023-04-17.a [[https://www.bbc.com/news/entertainment-arts-65298834|AI-generated Drake and The Weeknd song goes viral.]] Mark Savage, BBC News, Apr 17. A song, [[https://www.youtube.com/watch?v=VszJPLAtK0U|"Heart On My Sleeve"]], using the cloned voices of Drake and The Weeknd is viewed 8.5 million times over the weekend.

2023-04-17.b [[https://www.together.xyz/blog/redpajama|RedPajama, a project to create leading open-source models, starts by reproducing LLaMA training dataset of over 1.2 trillion tokens.]] (Research)

2023-04-17.c [[https://minigpt-4.github.io/|MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models.]] (Research)

2023-04-17.d [[https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/|OpenAI’s CEO Says the Age of Giant AI Models Is Already Over.]] Will Knight, Wired, Apr 17.

2023-04-19.a [[https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/|Inside the secret list of websites that make AI like ChatGPT sound smart.]] Washington Post, Apr 19.

<BOOKMARK:ref2023_07>

2023-07-06.a [[https://openai.com/blog/gpt-4-api-general-availability|GPT-4 is Generally Available.]]
  
<BOOKMARK:end_of_page>