Saw this post via Fast.ai: AI Safety and the Age of Dislightenment
Summary:
Proposals for strict AI model licensing and surveillance are likely to be ineffective or counterproductive; the balance between defending society and enabling society to defend itself is a delicate one. The post argues for openness, humility, and broad consultation, so that our responses stay aligned with our principles and values and can keep evolving as we learn more about this potentially powerful technology.
Thoughts:
The papers it cites all trace back to this one: FAR (Frontier AI Regulation: Managing Emerging Risks to Public Safety) https://arxiv.org/abs/2307.03718
Tightening regulation of GPAI (general-purpose AI) would only raise the barrier to entry and leave fewer competitors.
Regulating use would do more for public safety than regulating development.
Open source can increase diversity and reduce the danger of concentration in a few large companies.
Progress so far has happened precisely because of public unfamiliarity and the absence of regulation, so we should be careful to avoid unnecessary restrictions driven by panic over a new technology. See: "The EU's attempt to regulate open-source AI is counterproductive"
https://www.brookings.edu/articles/the-eus-attempt-to-regulate-open-source-ai-is-counterproductive/
Weblink: https://www.crowdcast.io/c/m2ooh0uy9syg
Speaker 1: Swyx - smol developer
https://docs.google.com/presentation/d/1d5N3YqjSJwhioFT-edmyjxGsPBCMb1uZg0Zs5Ju673k/edit#slide=id.g254e571859c_0_133
Articles:
https://stratechery.com/2018/techs-two-philosophies/
https://www.latent.space/p/agents
https://lilianweng.github.io/posts/2023-06-23-agent/
Speaker 2: Alex - Agent Eval
https://docs.google.com/presentation/d/1bo5uxaS4JMNt99VmsRdeTFLo9qSIByJiViIVakzF9NQ/edit#slide=id.g22b104eecb9_0_2
Debugging agent failures is hard:
failure tokens
CAPTCHAs
Three evaluation approaches
grab a bunch of datasets (see the sketch below)
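As a concrete reference for the dataset-driven approach above, here is a minimal sketch of an eval harness that runs an agent over a task dataset and keeps full failure records for later debugging. All names here (`run_agent`, the JSONL layout, the output file) are my own illustration, not from the talk.

```python
# Minimal sketch of a dataset-driven agent eval harness (hypothetical names;
# run_agent stands in for whatever agent executor is under test).
import json

def run_agent(task: str) -> str:
    """Placeholder: call your agent and return its final answer."""
    raise NotImplementedError

def evaluate(dataset_path: str) -> float:
    with open(dataset_path) as f:
        # One JSON object per line: {"task": ..., "expected": ...}
        tasks = [json.loads(line) for line in f]

    failures = []
    for t in tasks:
        try:
            answer = run_agent(t["task"])
            if t["expected"] not in answer:
                failures.append({"task": t["task"], "answer": answer})
        except Exception as e:  # agent crashed mid-trajectory (e.g. hit a CAPTCHA)
            failures.append({"task": t["task"], "error": repr(e)})

    # Keep the full failure records: agent failures are hard to debug after the fact.
    with open("failures.json", "w") as f:
        json.dump(failures, f, indent=2)
    return 1 - len(failures) / len(tasks)
```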
Alex: https://twitter.com/alexreibman
Gurkaran: https://twitter.com/aigsingh
Darius: https://twitter.com/dariusemrani
Jesse Hu https://twitter.com/huyouare
Q&A
What is the most affordable (free, local?) LLM for a specific agent executor / agent task like decision making, tool selection…?
MPT-7B
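For reference, a minimal sketch of running MPT-7B locally with Hugging Face transformers on a tool-selection prompt. The instruct checkpoint name and the prompt format are my own illustration, not from the session.

```python
# Sketch: local tool selection with MPT-7B via transformers (illustrative prompt).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mosaicml/mpt-7b-instruct"  # instruct variant; base model is "mosaicml/mpt-7b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # MPT ships custom model code on the Hub
)

prompt = (
    "You can use exactly one of these tools: search, calculator, none.\n"
    "Task: What is 17 * 24?\n"
    "Tool:"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
# Decode only the newly generated tokens (the tool choice).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:]))
```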
In my experience, the OpenAI functions work really well in deciding what tool(s) to use, even in multi-step scenarios. Do you think a chain-of-thought process is used behind the scenes, like ReAct or MRKL? And how useful are they now?
Also worth considering few-shot prompting.
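A sketch combining the two ideas: tool selection via OpenAI function calling (pre-1.0 `openai` SDK, as it existed when these notes were taken), nudged with a few-shot assistant turn that shows the desired tool choice. The tool schemas and the example turns are illustrative, not from the talk.

```python
# Sketch: function-calling tool selection with a few-shot example turn.
# Assumes OPENAI_API_KEY is set in the environment (pre-1.0 openai SDK).
import openai

functions = [
    {
        "name": "search_web",
        "description": "Search the web for current information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
    {
        "name": "calculator",
        "description": "Evaluate an arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
]

# Few-shot nudge: a prior (user, assistant) pair demonstrating the tool choice.
messages = [
    {"role": "user", "content": "What is 12 squared?"},
    {"role": "assistant", "content": None,
     "function_call": {"name": "calculator", "arguments": '{"expression": "12**2"}'}},
    {"role": "user", "content": "Who won the 2022 World Cup?"},
]

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions
)
# The model should pick search_web here, since the answer needs fresh information.
print(resp.choices[0].message.get("function_call"))
```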
Misc
Agent Hackathon
https://lablab.ai/event/ai-agents-hackathon
AgentEval (1st place)
It ended with a talk by the OpenAI CEO.