AI-enhanced development makes me more ambitious with my projects

2023-03-30 23:45:55

The thing I’m most excited about in our weird new AI-enhanced reality is the way it allows me to be more ambitious with my projects.

As an experienced developer, ChatGPT (and GitHub Copilot) save me an enormous amount of “figuring things out” time. For everything from writing a for loop in Bash to remembering how to make a cross-domain CORS request in JavaScript, I don’t even need to look things up any more: I can just prompt it and get the right answer 80% of the time.

This doesn’t just make me more productive: it lowers my bar for when a project is worth investing time in at all.

In the past I’ve had plenty of ideas for projects which I’ve ruled out because they would take a day, or days, of work to get to a point where they’re useful. I have enough other stuff to build already!

But if ChatGPT can drop that down to an hour or less, those projects can suddenly become viable.

Which means I’m building all sorts of weird and interesting little things that previously I wouldn’t have invested the time in.

I’ll describe my latest one of these mini-projects in detail.

Using ChatGPT to build a system to archive ChatGPT messages

I use ChatGPT a lot, and I want my own archive of the conversations I’ve had with it.

There’s no API for retrieving previous conversations at the moment, but if you open up the browser DevTools and watch the network tab you can see it loading JSON any time you navigate to an older conversation.

ChatGPT with the Chrome inspector open. On the left is a conversation; on the right the Chrome inspector shows the underlying JSON that was fetched for that conversation in the Network pane.

This gave me an idea for a solution…

What if I could run some JavaScript on the ChatGPT site that grabbed that JSON data and sent it off to my own server?

I couldn’t find the raw data in JavaScript variables available to the console, so I came up with an alternative approach: could I patch the default fetch() function to intercept calls made by the ChatGPT code to that endpoint, and grab the data that way?

Normally this is where the project would end. Figuring out how to patch fetch() like that is non-trivial, where I’m using the software engineer’s definition of “non-trivial” as meaning “I don’t know how to do that off the top of my head”. I’d have to do some thinking and research! I’m already tempted to drop this idea and work on something else.

But now I have ChatGPT… so I prompted it:

Write a new JavaScript fetch() function which calls the old one but also saves the JSON that is retrieved (if it is used to retrieve JSON) in a window.fetchedData object

It gave me some code, but it wasn’t quite what I wanted. So I added a follow-up prompt:

No I want to replace the old window.fetch() function entirely: I want to patch it so that I can intercept calls made by other existing code

And it produced the following:

// Define a global object to store fetched JSON data
window.fetchedData = {};

// Store a reference to the original fetch function
const originalFetch = window.fetch;

// Define a new fetch function that wraps the original fetch
window.fetch = async function(url, options) {
  try {
    // Call the original fetch function
    const response = await originalFetch(url, options);

    // Check if the response has a JSON content type
    const contentType = response.headers.get('content-type');
    if (contentType && contentType.includes('application/json')) {
      // If the response is JSON, clone the response so we can read it twice
      const responseClone = response.clone();
      // Parse the JSON data and save it to the fetchedData object
      const jsonData = await responseClone.json();
      window.fetchedData[url] = jsonData;
    }

    // Return the original response
    return response;
  } catch (error) {
    // Handle any errors that occur during the fetch
    console.error('Error fetching and saving JSON:', error);
    throw error;
  }
};

That’s exactly what I wanted!

I pasted it into the DevTools console on the ChatGPT site and clicked on some links. It worked as intended.

Here’s the full ChatGPT transcript that got me to this point.

So now I have a prototype! I’ve proved to myself that I can intercept the JSON data fetched by ChatGPT’s own web application code.

I only wanted to run my code on requests that matched https://chat.openai.com/backend-api/conversation/... I could write a regex for that, but I’d have to remember to escape the required characters. ChatGPT did that for me too:

const pattern = /^https:\/\/chat\.openai\.com\/backend-api\/conversation\//;

So now I have the key building blocks I need for my solution: I can intercept JSON fetches and then filter to just the data from the endpoint I care about.

But I need CORS support

My plan was to have my intercepting fetch() call POST the JSON data to my own Datasette Cloud instance, similar to how I record scraped Hacker News listings as described in this post about Datasette’s new write API.

One big problem: that means code running on the chat.openai.com domain needs to POST JSON to another server. And that means the other server needs to be serving CORS headers.

Datasette Cloud doesn’t (yet) support CORS, and I wasn’t about to implement a new production feature there just so I could solve a problem for this prototype.

What I really needed was some kind of CORS proxy… a URL running somewhere which forwards traffic on to Datasette Cloud but adds CORS headers to enable it to be called from elsewhere.

This represents another potential blocking point: do I really want to implement an entire proxy web application just for this little project?

Here’s my next ChatGPT prompt:

Write a web app in python Starlette which has CORS enabled (including for the auth header) and forwards all incoming requests to any path to another server host specified in an environment variable

I like Starlette and I know it has CORS support and is good for writing proxies. I was hopeful that GPT-4 had seen its documentation before the September 2021 training cut-off date.

ChatGPT wrote me some very solid code! It used the python-dotenv package though, and I didn’t feel like using that dependency. So I added:

Rewrite that to use os.environ

The exception handling looked wrong to me, so I said:

Catch httpx.HTTPError as the one exception you care about

And after just those three prompts it produced the following:

import os
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.cors import CORSMiddleware
from starlette.routing import Route
from starlette.requests import Request
from starlette.responses import JSONResponse
import httpx

TARGET_SERVER_HOST = os.environ.get("TARGET_SERVER_HOST")

middleware = [
    Middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )
]

app = Starlette(middleware=middleware)

async def forward_request(request: Request):
    try:
        async with httpx.AsyncClient() as client:
            target_url = f"{TARGET_SERVER_HOST}{request.url.path}"
            method = request.method
            headers = dict(request.headers)
            if "host" in headers:
                del headers["host"]
            if request.method in ["POST", "PUT", "PATCH"]:
                request_body = await request.json()
            else:
                request_body = None

            response = await client.request(
                method, target_url, headers=headers, json=request_body
            )
            return JSONResponse(response.json(), status_code=response.status_code)
    except httpx.HTTPError as e:
        return JSONResponse({"error": str(e)}, status_code=500)

app.add_route("/{path:path}", forward_request, methods=["GET", "POST", "PUT", "PATCH", "DELETE"])

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)

I tested this locally… and spotted a bug. It would fail with a 500 error if the incoming request included a content-length header: the proxy re-serializes the request body, so the original content-length may no longer match what actually gets forwarded. I fixed that myself by adding this code:

if "content-length" in headers:
    del headers["content-length"]

My finished code is here. Here’s the ChatGPT transcript.

I deployed this to Vercel using the approach described in this TIL, and now I had a working proxy server.

Creating the tables and a token

ChatGPT had gotten me a long way. The rest of my implementation was now a small enough lift that I could quickly finish it myself.

I created two tables in my Datasette Cloud instance by executing the following SQL (using the datasette-write plugin):

create table chatgpt_conversation (
  id text primary key,
  title text,
  create_time float,
  moderation_results text,
  current_node text,
  plugin_ids text
);
create table chatgpt_message (
  id text primary key,
  conversation_id text references chatgpt_conversation(id),
  author_role text,
  author_metadata text,
  create_time float,
  content text,
  end_turn integer,
  weight float,
  metadata text,
  recipient text
);

Then I created myself a Datasette API token with permission to insert-row and update-row just for those two tables, using the new finely grained permissions feature in the 1.0 alpha series.

The last step was to combine this all together into a fetch() function that did the right thing. I wrote this code by hand, using the ChatGPT prototype as a starting point:

const TOKEN = "dstok_my-token-here";

// Store a reference to the original fetch function
window.originalFetch = window.fetch;

// Define a new fetch function that wraps the original fetch

window.fetch = async function (url, options) {
  try {
    // Call the original fetch function
    const response = await originalFetch(url, options);

    // Check if the response has a JSON content type
    const contentType = response.headers.get("content-type");
    if (contentType && contentType.includes("application/json")) {
      // If the response is JSON, clone the response so we can read it twice
      const responseClone = response.clone();
      // Parse the JSON data and save it to the fetchedData object
      const jsonData = await responseClone.json();
      // NOW: if url is https://chat.openai.com/backend-api/conversation/...
      // do something very special with it
      const pattern =
        /^https:\/\/chat\.openai\.com\/backend-api\/conversation\/(.*)/;
      const match = url.match(pattern);
      if (match) {
        const conversationId = match[1];
        console.log("conversationId", conversationId);
        console.log("jsonData", jsonData);
        const conversation = {
          id: conversationId,
          title: jsonData.title,
          create_time: jsonData.create_time,
          moderation_results: JSON.stringify(jsonData.moderation_results),
          current_node: jsonData.current_node,
          plugin_ids: JSON.stringify(jsonData.plugin_ids),
        };
        fetch(
          "https://starlette-cors-proxy-simonw-datasette.vercel.app/data/chatgpt_conversation/-/insert",
          {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              Authorization: `Bearer ${TOKEN}`,
            },
            mode: "cors",
            body: JSON.stringify({
              row: conversation,
              replace: true,
            }),
          }
        )
          .then((d) => d.json())
          .then((d) => console.log("d", d));
        const messages = Object.values(jsonData.mapping)
          .filter((m) => m.message)
          .map((message) => {
            const m = message.message;
            let content = "";
            if (m.content) {
              if (m.content.text) {
                content = m.content.text;
              } else {
                content = m.content.parts.join("\n");
              }
            }
            return {
              id: m.id,
              conversation_id: conversationId,
              author_role: m.author ? m.author.role : null,
              author_metadata: JSON.stringify(
                m.author ? m.author.metadata : {}
              ),
              create_time: m.create_time,
              content: content,
              end_turn: m.end_turn,
              weight: m.weight,
              metadata: JSON.stringify(m.metadata),
              recipient: m.recipient,
            };
          });
        fetch(
          "https://starlette-cors-proxy-simonw-datasette.vercel.app/data/chatgpt_message/-/insert",
          {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              Authorization: `Bearer ${TOKEN}`,
            },
            mode: "cors",
            body: JSON.stringify({
              rows: messages,
              replace: true,
            }),
          }
        )
          .then((d) => d.json())
          .then((d) => console.log("d", d));
      }
    }

    // Return the original response
    return response;
  } catch (error) {
    // Handle any errors that occur during the fetch
    console.error("Error fetching and saving JSON:", error);
    throw error;
  }
};

The fiddly bit here was writing the JavaScript that reshaped the ChatGPT JSON into the rows: [array-of-objects] format needed by the Datasette JSON APIs. I could probably have had ChatGPT help with that too, but in this case I pasted the SQL schema into a comment and let GitHub Copilot auto-complete parts of the JavaScript for me as I typed it.

And it works

Now I can paste the above block of code into the browser console on chat.openai.com, and any time I click on one of my older conversations in the sidebar the fetch() will be intercepted and the JSON data will be saved to my Datasette Cloud instance.

A public demo

I’ve set up a public demo exposing messages from selected conversations here:

simon.datasette.cloud/data/chatgpt_public_messages

The demo itself is powered by an extra table (listing the conversations that should be public) and a SQL view.

I used the datasette-write plugin again to create these:

create table chatgpt_public (id text primary key);

create view chatgpt_public_messages as select
  chatgpt_message.id,
  chatgpt_conversation.title || char(10) || chatgpt_conversation.id as conversation,
  chatgpt_message.author_role,
  chatgpt_message.content,
  datetime(chatgpt_message.create_time, 'unixepoch') as create_time
from
  chatgpt_message join chatgpt_conversation on conversation_id = chatgpt_conversation.id
where
  chatgpt_message.create_time is not null
  and conversation_id in (select id from chatgpt_public)
order by
  chatgpt_message.create_time

Then I set the chatgpt_public_messages view to be public (using datasette-public).

Now I can insert conversation IDs into that chatgpt_public table to expose their messages in the public view.
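For example, using the datasette-write plugin again (the ID here is a made-up placeholder, not a real conversation ID):

```sql
insert into chatgpt_public (id) values ('example-conversation-id');
```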

This is the first time I’ve used a SQL view like this to selectively publish data from a larger private table, and I think it’s a really neat pattern. I’d like to make it easier to do without writing custom SQL though!

It’s so much more than just this project

This ChatGPT archiving problem is just one example from the past few months of things I’ve built that I wouldn’t have tackled without AI assistance.

It took me longer to write this up than it did to implement the entire project from start to finish!

When evaluating whether a new technology is worth learning and adopting, I have two criteria:

  1. Does this let me build things that would have been impossible to build without it?
  2. Can this reduce the effort required for some projects such that they tip over from “not worth it” to “worth it” and I end up building them?

Large language models like GPT3/4/LLaMA/Claude etc clearly meet both of those criteria, and their impact on point two keeps getting stronger for me.

Some more examples

Here are a few more examples of projects I’ve worked on recently that wouldn’t have happened without at least some level of AI assistance:
