16 Comments

But wait, there's more... I've been using it to break down tech system design concepts. Extremely powerful. It can describe concepts at any level, write code examples in any language, and iterate on the conversation or example (e.g., build the Twitter feed). Teaching yourself established subjects just got a massive upgrade over reading a book. Now you can ask questions along the way.
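For instance, here's a rough Swift sketch of the kind of code example it can produce and iterate on when you ask it to "build the Twitter feed" (the types and names are purely illustrative, not from any real codebase):

```swift
import Foundation

// Illustrative model of a tweet.
struct Tweet {
    let author: String
    let text: String
    let postedAt: Date
}

// Fan-in-on-read feed: gather tweets from everyone the user follows,
// newest first, capped at a limit.
struct FeedService {
    let tweetsByAuthor: [String: [Tweet]]

    func feed(for following: [String], limit: Int = 50) -> [Tweet] {
        let merged = following
            .flatMap { tweetsByAuthor[$0] ?? [] }
            .sorted { $0.postedAt > $1.postedAt }
        return Array(merged.prefix(limit))
    }
}

// Usage:
let service = FeedService(tweetsByAuthor: [
    "alice": [Tweet(author: "alice", text: "Hello world", postedAt: Date())]
])
let timeline = service.feed(for: ["alice", "bob"])
```

From there you can keep iterating in the same conversation, e.g. ask it to switch to a fan-out-on-write design or add pagination.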


Very good breakdown. I'd like to also mention another use case from more of a tech perspective. It is better than Google when it comes to answering technical questions such as "In Swift, how do I create a toggle that changes the destination of a navigation link?" It gives very straightforward examples of how to code this feature, and provides an explanation for how the code it provides works.
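For reference, a SwiftUI answer to that question looks roughly like the sketch below (the view and property names are mine, just to illustrate the pattern, so treat it as an approximation of its output rather than the exact code):

```swift
import SwiftUI

struct ContentView: View {
    // The toggle's state decides which destination the link opens.
    @State private var useAlternateDestination = false

    var body: some View {
        NavigationView {
            VStack(spacing: 20) {
                Toggle("Use alternate destination", isOn: $useAlternateDestination)
                    .padding(.horizontal)

                NavigationLink(destination: destination) {
                    Text("Open destination")
                }
            }
            .navigationTitle("Toggle Demo")
        }
    }

    // Chosen at navigation time based on the toggle's current state.
    @ViewBuilder
    private var destination: some View {
        if useAlternateDestination {
            Text("Destination B")
        } else {
            Text("Destination A")
        }
    }
}
```

It then explains how the pieces fit, e.g. that the @State property drives which branch of the @ViewBuilder destination gets shown.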

There are reports that Microsoft is going to try to acquire it for Cortana, to make their search system actually usable. If true, it's a pretty major opportunity for Microsoft to grab some market share in the search space.


I was going to write about developer use cases but thought it would be too technical for most people. But yes, coding is another use case. I have a highly skilled and gifted dev friend who uses ChatGPT to save hours on his code. He can ask specific questions, personalized to his problem, that might not have answers on GitHub or other sources, instead of scrolling through Google to find a relevant result.

It's the same as the marketing case in the sense that it's an amazing supplemental tool. But it can't push out finished & refined code.


Great article on a possible use case of ChatGPT.

But make no mistake: if you are doing anything with this tool that is *remotely* political/medical/etc., you will get a heavily biased (i.e., left-leaning) response that mirrors the currently promoted narrative pushed by big tech/big pharma/big government.

Whether this is the result of garbage in (training data sources), garbage out remains to be seen, but never blindly trust a tool of any kind, especially one couched in the neutral-sounding "AI" label.


AI is inherently biased toward mainstream opinion. This is due to the training data set.

A viewpoint is mainstream because it's viewed as correct. Whether or not it actually is doesn't matter; it's viewed as correct. That means more people espouse the belief, which leads to more text/data than an opposing or fringe opinion.

So the mainstream opinion has more data behind it. If you ask ChatGPT what it thinks about XYZ, it's going to give you the mainstream opinion, because that's what the data deems correct.

But I did the same in the example above. I crafted my prompts in a way that goes against "traditional toothpaste/oral health" advice and got an alternative result. Now, due to AI filters, it might not allow you to be as direct. But there are workarounds.

Think about a prompt like this: craft a counterargument to XYZ (a controversial/hot-button topic), then follow a conversational flow like the one above, hitting the ball back and forth with ChatGPT playing devil's advocate. I just did this and it worked fine.

It's all about the input. If it's an explicitly controversial prompt, then they have basic filters in place to prevent ChatGPT from giving a result. But if you prompt it to play devil's advocate, it will work just fine.


Fascinating. Hadn’t even considered this sort of functionality or application. Our worlds are going to evolve so much more quickly. Going to be a wild ride and anyone not keeping up will feel left behind.


ChatGPT is pretty helpful for pitchbook creation if you work in finance.


Using it to help update your resume is another use case.


Given the reports that some of what ChatGPT produces can best be described as "confidently incorrect," and the fact that it's using these real-world interactions with itself to further train the model, what risk is there of an AI death spiral where the results get progressively worse over time because more and more of the training data is not actually factual? Garbage in = garbage out


It got the wrong type of Streptococci in the example, but no one will care.


Or worse, being intentionally fed wrong data to sway the output.


This post is well written, gets a lot right, and is a step in the right direction, but is still missing the big picture behind this technology and the overall trend. Would love to discuss this with anyone who has questions or would like to learn more. If you want to research on your own, look into the 3 kinds of learning for neural networks and machine learning models (Supervised, Unsupervised, Reinforcement). Hint: the real magic is with Reinforcement Learning.
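As a toy illustration of the reinforcement idea, here is a made-up epsilon-greedy bandit in Swift. This is not how ChatGPT itself is trained (its training combines supervised fine-tuning with reinforcement learning from human feedback); it just shows learning from reward feedback alone:

```swift
import Foundation

// Two "actions" with hidden payout odds; the agent only sees rewards.
struct Bandit {
    let payoutProbabilities: [Double]
    func pull(_ action: Int) -> Double {
        Double.random(in: 0...1) < payoutProbabilities[action] ? 1.0 : 0.0
    }
}

let bandit = Bandit(payoutProbabilities: [0.3, 0.7])   // made-up numbers
var estimates = [0.0, 0.0]    // learned value of each action
var counts = [0, 0]
let epsilon = 0.1             // exploration rate

for _ in 0..<1_000 {
    // Explore occasionally, otherwise exploit the best-known action.
    let action = Double.random(in: 0...1) < epsilon
        ? Int.random(in: 0..<2)
        : (estimates[0] >= estimates[1] ? 0 : 1)

    let reward = bandit.pull(action)
    counts[action] += 1
    // Incremental average update of the action-value estimate.
    estimates[action] += (reward - estimates[action]) / Double(counts[action])
}

print("Learned action values:", estimates)   // should approach [0.3, 0.7]
```

No labels and no "correct answer" in the data, just reward feedback, which is the key difference from supervised learning.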


Great post. Very informative.


Ask ChatGPT about anything political or medical/biological and it will give you its biased opinion on whatever sits inside the current Overton window.


Wah... This is cool. Going to use this to experiment with ad angles! Thanks dude.


Great writeup. What's the current limit on how much text ChatGPT can analyze?
