ChatGPT - How careful should we be?

The Monday Deep Dive

You’ve all seen my post.

How excited I am that ChatGPT can 10x your productivity.

But what are the dangers, and how should we really be thinking about ChatGPT (OpenAI) as Procurement Pros?

For this article I brought in someone with far deeper knowledge than me…James Garforth. All views and opinions expressed in this article are his own (independent from the business he works for).

James Garforth, IBM Consulting - Procurement Outsourcing Services Delivery Leader

What are the dangers of using ChatGPT to enhance my productivity as a Procurement Pro?

James - “The whole point of a Large Language Model is that it gets trained on the information everyone’s putting into it.

So whatever questions you ask, and whatever follow-ups, it’s ingesting them. There’s no walled garden to protect confidentiality.

The danger is you don’t know what other people will then use the information you’ve fed into the tool for. The information you’ve put in may well come up in an answer for somebody else and could be used inappropriately.

You can’t guarantee the integrity of the information with ChatGPT.

Secondly, people post things on LinkedIn like prompt libraries, but prompt engineering is difficult for people who don’t fully understand it.

It’s evolving so fast.

So, for example, Llama 3.1 was released on Wednesday with X billion more parameters. You could have run prompts against Llama on the Tuesday, and by the Wednesday the same prompts would have given you totally different answers.

Plenty of people who have worked in tech Procurement and have dealt with data quality or data security issues will be more savvy.

But others might not be. The nuance is that the danger can be overcome if we’re careful and aware - careful of what we put in, and experienced enough to sense-check and critique the outputs.”

So should we not be using it?

James - “AI is going to change the way we work. So, in the UK, for example, we’re something like 65% knowledge workers, so in theory the whole economy will be impacted by the introduction of AI.

Warning people off probably isn’t the answer, because they’ll be scared to get involved or to drive the benefits from it. It’s a nuanced position of just being cautious and building our awareness of how OpenAI works.”

How would you advise a Procurement Manager to approach Chat GPT then?

James - “My advice isn’t to steer clear but instead to look across your process architecture or your 7-step sourcing process and home in on the elements where you can get benefits from AI while mitigating the most risk.”

It’s like an advanced Google Search, isn’t it?

James - “Correct, but what I like about OpenAI is that you can dig into the information provided depending on what you initially get back.

For example, you might ask a question about a Procurement topic and then go deeper and ask something like ‘what are the typical KPIs in this space?’. It’s a no-harm approach to digging for information, as long as you then qualify that information with your own experience or apply it to the specifics of your situation.”

So, recently I had just 15 minutes to prepare for an escrow negotiation in a supplier discussion, and I asked ChatGPT to tell me the main considerations. Is that an appropriate use of ChatGPT?

James - “Yes, I’d say so, 100%, but I’m not the arbiter of what’s right and what’s wrong. The only danger in that situation is that it’s using open models, so you don’t know what it’s been trained on.

It might not have been totally accurate, or it may have given a bum steer.

So, you have to apply a bit of a sniff test. Like…am I going to lose credibility in the meeting because what it’s told me is untrue?”

So, I guess because I have plenty of experience in Procurement, it’s less dangerous for me than if a 20-year-old starting out were asking the same question?

James - “I would say yes, it definitely becomes more dangerous with less experience, but I wouldn’t want to put the more junior Procurement individual off using it.

AI trained specifically for certain types of tasks, including things like personal development, and having a tool as a pseudo technological mentor or guide, isn’t necessarily a bad thing. It’s just about encouraging people to think about whether the information provided is right in respect of:

  • What my company policy says we should be doing.

  • What our ethical requirements are.

  • Is the information right?

  • Does it make sense?

  • Am I in danger of breaching a process if I follow the advice?”

To sum up…

What a brilliant, balanced perspective from James.

It’s about applying the information to the specific situation and company you are in.

The term I’m going to use is to be cautiously creative with how you use it.

And that’s where emotional intelligence (human skills) has to be overlaid.

The danger lies in people saying things like ‘AI is going to do it all for you. Job done.’

Let me know your thoughts…
