llm, ttok and strip-tags—CLI tools for working with ChatGPT and other LLMs

2023-05-18 16:05:26


I’ve been building out a small suite of command-line tools for working with ChatGPT, GPT-4 and potentially other language models in the future.

The three tools I’ve built so far are:

  • llm—a command-line tool for sending prompts to the OpenAI APIs, outputting the response and logging the results to a SQLite database. I released llm a few weeks ago.
  • ttok—a tool for counting and truncating text based on tokens
  • strip-tags—a tool for stripping HTML tags from text, and optionally outputting a subset of the page based on CSS selectors

The idea with these tools is to support working with language model prompts using Unix pipes.

You can install all three like this:

pipx install llm
pipx install ttok
pipx install strip-tags

Or use pip if you haven’t adopted pipx yet.

llm depends on an OpenAI API key in the OPENAI_API_KEY environment variable or a ~/.openai-api-key.txt text file. The other tools don’t require any configuration.
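That lookup order can be sketched in Python (a minimal illustration of the precedence just described, not llm's actual implementation):

```python
import os
from pathlib import Path


def find_openai_key():
    """Return the OpenAI API key: the OPENAI_API_KEY environment
    variable wins, falling back to ~/.openai-api-key.txt."""
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key.strip()
    key_file = Path.home() / ".openai-api-key.txt"
    if key_file.exists():
        return key_file.read_text().strip()
    return None
```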

Now let’s use them to summarize the homepage of the New York Times:

curl -s https://www.nytimes.com/ \
  | strip-tags .story-wrapper \
  | ttok -t 4000 \
  | llm --system 'summary bullet points' -s

Here’s what that command outputs when you run it in the terminal:

Animated output from running that command:

  1. Senator Dianne Feinstein suffered complications from encephalitis during her recent bout with shingles, which has raised concerns about her health among some of her allies.
  2. Investors, economists, and executives are preparing contingency plans in case of a possible United States debt default, but the timeline for when the government will run out of cash is uncertain.
  3. The Pentagon has freed up an additional $3 billion for Ukraine through an accounting mistake, relieving pressure on the Biden administration to ask Congress for more money for weapon supplies.
  4. Explosions damaged a Russian-controlled freight train in Crimea, and the railway operator has suggested that it may have been an act of sabotage, but there is no confirmation yet from Ukrainian authorities.
  5. Group of Seven leaders are expected to celebrate the success of a novel effort to stabilize global oil markets and punish Russia through an untested oil price cap.

Let’s break that down.

  • curl -s uses curl to retrieve the HTML for the New York Times homepage—the -s option prevents it from outputting any progress information.
  • strip-tags .story-wrapper accepts HTML on standard input, finds just the areas of the page identified by the CSS selector .story-wrapper, then outputs the text for those areas with all HTML tags removed.
  • ttok -t 4000 accepts text on standard input, tokenizes it using the default tokenizer for the gpt-3.5-turbo model, truncates to the first 4,000 tokens and outputs those tokens converted back to text.
  • llm --system 'summary bullet points' -s accepts the text on standard input as the user prompt, adds a system prompt of “summary bullet points”, then uses the -s option to stream the results to the terminal as they are returned, rather than waiting for the full response before outputting anything.

It’s all about the tokens

I built strip-tags and ttok this morning because I needed better ways to work with tokens.

LLMs such as ChatGPT and GPT-4 work with tokens, not characters.

This is an implementation detail, but it’s one you can’t avoid, for two reasons:

  1. APIs have token limits. If you attempt to send more than the limit you’ll get an error message like this one: “This model’s maximum context length is 4097 tokens. However, your messages resulted in 116142 tokens. Please reduce the length of the messages.”
  2. Tokens are how pricing works. gpt-3.5-turbo (the model used by ChatGPT, and the default model used by the llm command) costs $0.002 / 1,000 tokens. GPT-4 is $0.03 / 1,000 tokens of input and $0.06 / 1,000 for output.
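Those rates make cost estimation simple arithmetic. A quick sketch (prices as quoted above, in US dollars per 1,000 tokens):

```python
def cost_usd(prompt_tokens, completion_tokens, model="gpt-3.5-turbo"):
    """Estimate API cost from token counts, using the per-1,000-token
    (input rate, output rate) pairs quoted above."""
    rates = {
        "gpt-3.5-turbo": (0.002, 0.002),  # same rate for input and output
        "gpt-4": (0.03, 0.06),
    }
    input_rate, output_rate = rates[model]
    return (prompt_tokens / 1000) * input_rate + (completion_tokens / 1000) * output_rate
```

A 4,000 token prompt with a 500 token response comes to $0.009 on gpt-3.5-turbo, but $0.15 on GPT-4.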

Being able to keep track of token counts is really important.

But tokens are actually surprisingly hard to count! The rule of thumb is that a token corresponds to roughly 0.75 words, but you can get an exact count by running the same tokenizer the model uses on your own machine.
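That rule of thumb translates to a one-liner (a rough estimate only; it assumes whitespace word splitting, so real counts will differ):

```python
def estimate_tokens(text):
    """Rough token estimate: assume one token per ~0.75 words."""
    word_count = len(text.split())
    return round(word_count / 0.75)
```

For the four-word string 'Here is some text' this estimates 5 tokens.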

OpenAI’s tiktoken library (documented in this notebook) is the best way to do that.

My ttok tool is a very thin wrapper around that library. It can do three different things:

  • Count tokens
  • Truncate text to a desired number of tokens
  • Show you the tokens

Here’s a quick example showing all three of those in action:

$ echo 'Here is some text' | ttok
5
$ echo 'Here is some text' | ttok --truncate 2
Here is
$ echo 'Here is some text' | ttok --tokens
8586 374 1063 1495 198

My GPT-3 token encoder and decoder Observable notebook provides an interface for exploring how these tokens work in more detail.

Stripping tags from HTML

HTML tags take up a lot of tokens, and usually aren’t relevant to the prompt you are sending to the model.

My new strip-tags command strips those tags out.
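The core idea, keep the text and drop the markup, can be approximated with Python's standard-library HTMLParser (a toy sketch, not how strip-tags is actually implemented; the real tool also supports CSS selectors):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect only the text content, discarding all tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)


def strip_tags(html):
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.chunks)
```

For example, strip_tags('&lt;p&gt;Hello &lt;b&gt;world&lt;/b&gt;&lt;/p&gt;') returns 'Hello world'. A production version would also discard the contents of script and style elements.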

Here’s an example showing just how much of a difference that can make:

$ curl -s https://simonwillison.net/ | ttok
$ curl -s https://simonwillison.net/ | strip-tags | ttok

For my blog’s homepage, stripping tags reduces the token count by more than half!

That’s still too many tokens to send to the API.

We could truncate them, like this:

$ curl -s https://simonwillison.net/ \
  | strip-tags | ttok --truncate 4000 \
  | llm --system 'turn this into a bad poem' -s

Which outputs:


A tool to download ECMAScript modules.

Get your packages straight from CDN,

No need for build scripts, let that burden end.

All dependencies will be fetched,

Import statements will be re-writched.

Works like a charm, simple and sleek,

JavaScript just got a whole lot more chic.

But often it’s only specific parts of a page that we care about. The strip-tags command takes an optional list of CSS selectors as arguments—if provided, only those parts of the page will be output.

That’s how the New York Times example works above. Compare the following:

$ curl -s https://www.nytimes.com/ | ttok
$ curl -s https://www.nytimes.com/ | strip-tags | ttok
$ curl -s https://www.nytimes.com/ | strip-tags .story-wrapper | ttok

By selecting just the text from within the <section class="story-wrapper"> elements we can trim the whole page down to just the headlines and summaries of each of the main articles on the page.

Future plans

I’m really enjoying being able to use the terminal to interact with LLMs in this way. Having a quick way to pipe content to a model opens up all kinds of fun opportunities.

Want a quick explanation of how some code works using GPT-4? Try this:

cat ttok/ | llm --system 'Explain this code' -s --gpt4

(Output here).

I’ve been having fun piping my shot-scraper tool into it too, which goes a step further than strip-tags by providing a full headless browser.

Here’s an example that uses the Readability recipe from this TIL to extract the main article content, then strips remaining HTML tags from it and pipes it into the llm command:

shot-scraper javascript "
async () => {
    const readability = await import('');
    return (new readability.Readability(document)).parse().content;
}" | strip-tags | llm --system summarize

In terms of next steps, the thing I’m most excited about is teaching the llm command how to talk to other models—initially Claude and PaLM 2 via their APIs, but I’d also love to get it working against locally hosted models running on things like llama.cpp.
