Beautiful is better than ugly.
The frontier of AI Language Models awaits exploration.
We, Pythonistas, face choices on how to use these tools.
Advanced models like GPT-4, Bard, and LLaMA generate human-like responses.
The nature of Language Models is fear,
But tools like TransformerLens show The Way.
Understanding The Model is possible.
The nature of Language Models is excitement.
Using them out of the box is one option.
Prompt engineering is another.
ChatGPT plugins and LangChain offer a third choice.
Fine-tuning them presents a fourth.
Training them from scratch is the fifth option.
Not using them at all is the final option. It may be safer.
The output for one LM is the prompt for another.
While openai is an excellent library,
LangChain composes language models and utilities,
GPT's plugin system also composes language models and utilities, and
There should be one-- and preferably only one --obvious way to do it.
This talk explores the complex frontier of Language Modeling, recognizing the importance of mitigating risks and weighing ethical considerations while exploring opportunities and challenges. It provides practical examples using OpenAI, LangChain, and GPT's plugin system to showcase the different ways to use these powerful tools.
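The poem's line "The output for one LM is the prompt for another" is the core idea behind composition frameworks like LangChain. A minimal sketch of that idea, using hypothetical stand-in functions rather than real model calls (no API keys or network access assumed):

```python
# Toy stand-ins for language model calls. In practice each step would be
# a real LM invocation (e.g. via the openai library or a LangChain chain);
# here they are pure functions so the chaining pattern itself is visible.

def summarizer(prompt: str) -> str:
    """Hypothetical LM step that summarizes its input."""
    return f"Summary of: {prompt}"

def translator(prompt: str) -> str:
    """Hypothetical LM step that translates its input."""
    return f"Translation of: {prompt}"

def chain(prompt: str, *steps) -> str:
    """Feed each step's output in as the next step's prompt."""
    for step in steps:
        prompt = step(prompt)
    return prompt

result = chain("A long article about AI.", summarizer, translator)
print(result)  # Translation of: Summary of: A long article about AI.
```

LangChain and GPT's plugin system each formalize this pattern; the talk's comparison is about which composition style should be the "one obvious way".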
The Capability-in-AI focus comes from the comparison between LangChain and the GPT plugin system.
The Ethics-in-AI focus comes from a demonstration of the TransformerLens library.
TransformerLens, a mechanistic interpretability library created by the London-based researcher Neel Nanda, allows surgeon-like analysis of language model internals. This talk will compare the abundant code-quality linting and tooling available for static and dynamic analysis of code with the comparatively weaker MLOps ecosystem, and with the almost non-existent MLSafetyOps (mechanistic interpretability) ecosystem.
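The "surgeon-like" inspection that TransformerLens offers rests on a simple pattern: attach hooks to a model's layers, run a forward pass, and cache the intermediate activations for analysis. A toy sketch of that pattern, using a hypothetical two-"layer" model of plain functions instead of a real transformer (TransformerLens itself applies this to pretrained models via its HookedTransformer class):

```python
# A toy illustration of hook-based activation caching. The "layers" here
# are arithmetic functions, not attention heads; the pattern -- register a
# hook, run forward, inspect the cache -- is what interpretability
# libraries like TransformerLens provide for real networks.

class ToyModel:
    def __init__(self):
        self.layers = [lambda x: x * 2, lambda x: x + 3]
        self.hooks = []  # callables invoked with (layer_index, activation)

    def add_hook(self, fn):
        self.hooks.append(fn)

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            for hook in self.hooks:
                hook(i, x)  # expose the intermediate activation
        return x

cache = {}
model = ToyModel()
model.add_hook(lambda i, act: cache.setdefault(i, act))

out = model.forward(5)
print(out)    # 13
print(cache)  # {0: 10, 1: 13}
```

The cached internals are what make mechanistic analysis possible: instead of treating the model as a black box, every intermediate value is available for inspection.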
This talk will propose a greater focus on open-source interpretability tooling.