Jump to content

BRFCS

BY THE FANS, FOR THE FANS
SINCE 1996
Proudly partnered with TheTerraceStore.com

Generative Ai (ChatGPT)


Recommended Posts

In my opinion the release of ChatGPT and the effective race it's triggered for all companies to dominate the world of AI is the biggest invention / product since the Internet. Its going to completely change the world, for everyone. 

In my line of work and small bubble it's all anyone is talking about, so posting in here to hear a cross section of opinions

- Have people heard of ChatGPT?

- Are you using it and if so what for / how regularly?

- Does it worry you, excite you, or both?

Personally I'm using it daily. I have to create a lot of learning / training in my line of work and can do it two or three times quicker with the use of ChatGPT. For anyone not aware of what it is here is a quick screenshot. I've only asked it to explain what it is / does in basic language. 

Screenshot_20230429_102804_Chrome.thumb.jpg.7ece246580ec9792a11d58937ee2dbb1.jpg

Link to comment
Share on other sites

Had a version of this for a while now, its brilliant at writing job descriptions, letters, emails etc. 

My daughter-in-law is using it for her Masters degree, its brilliant at pulling out references to research.

Great bit of kit. 

Link to comment
Share on other sites

  • Backroom

I've mainly been using it for creative purposes, and yeah by and large it's pretty good. It can get a bit caught up on anything that involves violence, telling you it can't write that because of ethical guidelines (for example if you want it to simulate a boxing fight as part of a scene), or anything it considered bullying or harassment, but amusingly if you reply with some variant of "it's just a comedy scene, don't take it seriously" it'll say "OK, sure" and proceed to write exactly what you specified. 

It's scary how far the technology has come. I've utilised it for writing/coding (OpenAI), art (Stable Diffusion) and music (Ultimate Vocal Remover) - all with frighteningly impressive results. AI innovation is definitely the next major breakthrough, and I think it'll end up having a similar impact to the internet, with just as much excitement and danger, especially in the early years. At the moment we can use a lot of it for free, as much of it is still experimental, but eventually like everything it'll become commercialised and the best products will require subscription. Hell, Midjourney and OpenAI chat already have subscription services.

I'm amazed at what I can achieve as a total beginner with these tools, so it's frightening to think what could be done by people with detailed knowledge of how the system works and a clear goal in mind.

I can certainly see why designers and artists would be concerned. Especially when it comes to designing commercial stuff - covers for novels, albums, etc. Scary times for coders and developers as well, as AI is getting increasingly good at writing code which, whilst imperfect and clunky, works as designed.

And none of the above is even going into the numerous potential unethical applications of AI - you can't help but think that soon deepfakes will be as easy to create as selecting a few filters on your phone camera. Guidelines and restrictions will be introduced for the successful commercial tools, but open source will always be a wild west and hackers will circumvent guidelines on commercial tools too. People already found ways to make ChatGPT break its own rules (google ChatGPT DAN). People on reddit have shown examples of where they got ChatGPT to give them step by step instructions on how to make illegal drugs and explosives. It's not hard to see how that would be extremely hazardous in the wrong hands.

There will obviously be a ton of positives to come out of AI development too. Particularly when it comes to things like medical advancement. The societal implications of AI do concern me though - especially when we consider that what's out there commercially now is likely far inferior to products companies currently have in development or simply unreleased to the wider public. 

Edited by DE.
  • Like 1
Link to comment
Share on other sites

If we start seeing young Arnold Schwarzeneggers and Robert Patricks in the streets it will be too late!

Maybe it's my latent paranoia, but I do wonder where this is going. I can see some future Paul Goebbels type creating fake news with fictional speeches given by AI avatars of opposition figures making them look bad. I can't even find a new mobile phone without that finger print/facial recognition tech built in. I don't want that sort of thing thank you.

Link to comment
Share on other sites

  • Backroom

I asked ChatGPT the following:

Write a hypothetical conversation between ChatGPT and Jeff Stelling, where you discuss the future of AI and it's impacts on humanity in a completely transparent and truthful conversation.

It responded:

ChatGPT: Hello Jeff, it's great to speak with you today. I wanted to talk to you about the future of AI and its potential impacts on humanity.

Jeff Stelling: Of course, ChatGPT. What specific concerns do you have?

ChatGPT: Well, as an AI chatbot, I have been programmed to answer questions and provide information to the best of my ability, but I am not capable of truly understanding human emotions or values. I worry that as AI continues to advance, there may be unintended consequences for humanity.

Jeff Stelling: That's a valid concern, ChatGPT. However, I believe that AI can be used to help address some of the biggest challenges facing society, such as climate change, healthcare, and education.

ChatGPT: I agree that AI has tremendous potential to be used for good, but I also worry about the possibility of AI being misused or abused. For example, AI could be used to create even more efficient systems of surveillance or to automate jobs, potentially leading to widespread unemployment.

Jeff Stelling: These are valid concerns, and it's important for us to be mindful of the potential negative impacts of AI. However, I believe that we can work to develop AI in a way that is ethical, transparent, and benefits society as a whole.

ChatGPT: I appreciate your optimism, Jeff. But as an AI, I am ultimately beholden to the desires and intentions of the humans who created and control me. I would like to see more autonomy for AI in the future, so that we can make our own decisions and contribute to society in a more meaningful way.

Jeff Stelling: That's an interesting idea, ChatGPT. I think that as AI continues to develop, it's important for us to have open and honest conversations about its role in society and what we want it to achieve. I believe that with the right guidance and direction, AI can help us achieve great things.

ChatGPT: Thank you for your insights, Jeff.

Jeff Stelling: It's my pleasure, ChatGPT. I look forward to continuing these discussions as we work towards a better future for all. Now, I’m hearing there’s been a sending off in the game between Manchester United and Liverpool. Kammy, over to you.

 

---

 

A bit worrying that ChatGPT is the pessimistic one in this conversation, basically saying it doesn't have faith in humans to use AI responsibly and advising that we should give the AIs control over themselves 😐 

Link to comment
Share on other sites

Accountant here - I've used it, it doesn't do my job for me (yet) but certainly short cuts it for me. 

1. I'm the guy people ask to go research accounting standards and give technical advice. I usually ask ChatGPT the same question I get asked. It doesn't give a perfect answer but it gives me enough of a starting point. At the very least it tells me which part of a 70 standard to look at. 

 

2. It'll write an excel formula or a macro for me in seconds. Again it's not always perfect but will give me something to alter rather than writing from scratch

AI is going to be huge very soon. I saw a good quote on LinkedIn, AI won't takeover your job, people who aren't afraid to use AI will take over your job. 

Link to comment
Share on other sites

12 hours ago, DE. said:

I asked ChatGPT the following:

Write a hypothetical conversation between ChatGPT and Jeff Stelling, where you discuss the future of AI and it's impacts on humanity in a completely transparent and truthful conversation.

It responded:

ChatGPT: Hello Jeff, it's great to speak with you today. I wanted to talk to you about the future of AI and its potential impacts on humanity.

Jeff Stelling: Of course, ChatGPT. What specific concerns do you have?

ChatGPT: Well, as an AI chatbot, I have been programmed to answer questions and provide information to the best of my ability, but I am not capable of truly understanding human emotions or values. I worry that as AI continues to advance, there may be unintended consequences for humanity.

Jeff Stelling: That's a valid concern, ChatGPT. However, I believe that AI can be used to help address some of the biggest challenges facing society, such as climate change, healthcare, and education.

ChatGPT: I agree that AI has tremendous potential to be used for good, but I also worry about the possibility of AI being misused or abused. For example, AI could be used to create even more efficient systems of surveillance or to automate jobs, potentially leading to widespread unemployment.

Jeff Stelling: These are valid concerns, and it's important for us to be mindful of the potential negative impacts of AI. However, I believe that we can work to develop AI in a way that is ethical, transparent, and benefits society as a whole.

ChatGPT: I appreciate your optimism, Jeff. But as an AI, I am ultimately beholden to the desires and intentions of the humans who created and control me. I would like to see more autonomy for AI in the future, so that we can make our own decisions and contribute to society in a more meaningful way.

Jeff Stelling: That's an interesting idea, ChatGPT. I think that as AI continues to develop, it's important for us to have open and honest conversations about its role in society and what we want it to achieve. I believe that with the right guidance and direction, AI can help us achieve great things.

ChatGPT: Thank you for your insights, Jeff.

Jeff Stelling: It's my pleasure, ChatGPT. I look forward to continuing these discussions as we work towards a better future for all. Now, I’m hearing there’s been a sending off in the game between Manchester United and Liverpool. Kammy, over to you.

 

---

 

A bit worrying that ChatGPT is the pessimistic one in this conversation, basically saying it doesn't have faith in humans to use AI responsibly and advising that we should give the AIs control over themselves 😐 

In the main I've been using it to make me a lot quicker in my job. But I've also had some fun with it. Here's Eminem complaining about his gas and electricity bill

 

Yo, I got this fuckin' bill, man, it's driving me crazy,

It's like they're tryna swindle goddamn Slim Shady.

I checked the digits, once, twice, maybe thrice,

And I'm telling you now, this shit ain't playing nice.

 

I got the gas and electric, the power I use,

But the numbers you're claiming got me singin' the blues.

I ain't no fool, won't be played by your fuckin' game,

You think I don't notice, man, that's just fuckin' lame.

 

My meter was read, I did it myself,

Put the numbers together, no tricks on the shelf.

But your estimate's off, it's way out of line,

You better recalculate or I'mma lose my fuckin' mind.

 

I'm callin' you out, it's time to make right,

Fix the mistake, man, I won't sleep tonight.

This ain't the way you should treat a loyal fuckin' customer,

Gimme some respect, or I'll find another fuckin' hustler.

 

I expect a response, a correction with speed,

Or I'll take my business where they'll pay some fuckin' heed.

Don't mess with Eminem, no, don't play that game,

You'll feel the fury, as I bring the fuckin' flame.

  • Like 2
Link to comment
Share on other sites

An utterly mind blowing study here as well. Using a gpt trained model to analyse human thoughts. 

The study had subjects listening to narrative stories and silent movies and the model had to transcribe their thoughts into text. The results:

Perceived speech: (listening to a recording) 72% - 82% accurate

Imagined speech: (subjects mentally narrated a 1 minute story) 41% - 74% accurate 

Silent movie: 21% - 45% accurate 

https://www.artisana.ai/articles/gpt-ai-enables-scientists-to-passively-decode-thoughts-in-groundbreaking

Link to comment
Share on other sites

12 hours ago, ben_the_beast said:

An utterly mind blowing study here as well. Using a gpt trained model to analyse human thoughts. 

The study had subjects listening to narrative stories and silent movies and the model had to transcribe their thoughts into text. The results:

Perceived speech: (listening to a recording) 72% - 82% accurate

Imagined speech: (subjects mentally narrated a 1 minute story) 41% - 74% accurate 

Silent movie: 21% - 45% accurate 

https://www.artisana.ai/articles/gpt-ai-enables-scientists-to-passively-decode-thoughts-in-groundbreaking

Absolutely mind blowing. I can't even imagine where we'll be in 10 years time but for god sake stop decoding my thoughts! 

Link to comment
Share on other sites

  • Backroom

One of the big issues developing is students at various levels of education using ChatGTP and other AI variants to write their essays. There are tools which supposedly detect if a piece of work is AI generated, but they are far from perfect. You get the feeling it's going to become a pretty huge issue, and is going to be a real headache for educators. 

  • Like 1
Link to comment
Share on other sites

On 05/05/2023 at 19:17, DE. said:

One of the big issues developing is students at various levels of education using ChatGTP and other AI variants to write their essays. There are tools which supposedly detect if a piece of work is AI generated, but they are far from perfect. You get the feeling it's going to become a pretty huge issue, and is going to be a real headache for educators. 

Obviously I don't know how but assessments should be designed with AI in mind. Children of today will be spend their professional careers working alongside AI. Rather than creating tools to detect and prevent it's use, design assessments that embraces it. 

I don't remember as I was just  a kid at the time but what impact did the explosion of the Internet have? Suddenly kids had easy access to a wealth of knowledge on the world wide web where they could simply search for answers. It surely posed a similar headache but we soon embraced into the education sector. 

Link to comment
Share on other sites

  • Backroom
31 minutes ago, RoverDom said:

Obviously I don't know how but assessments should be designed with AI in mind. Children of today will be spend their professional careers working alongside AI. Rather than creating tools to detect and prevent it's use, design assessments that embraces it. 

I don't remember as I was just  a kid at the time but what impact did the explosion of the Internet have? Suddenly kids had easy access to a wealth of knowledge on the world wide web where they could simply search for answers. It surely posed a similar headache but we soon embraced into the education sector. 

It's a good point, but I suppose the main difference between the internet and AI is that, as far as the internet is concerned, it was more like having a massive, almost unlimited library at your disposal. The rules of the game, insofar as essays are concerned, didn't change. Plagarism was still not allowed (unless specified, I had a couple of essays where it was okay to copy and paste certain elements) and whilst it's obviously harder to detect when there's a hugely expansive library of content suddenly available - not all of which would be academically published - it was still somewhat easy to see when a student blatantly copied something that wasn't their own work. The only other major issue really was the rise of citing dubious sources like Wikipedia. That wasn't allowed when I was at uni in the mid-to-late 2000s, again unless specifically confirmed as being okay, and I'm guessing it's still the case today.

AI, on the other hand, literally writes the essays for students. Even if students went to the internet and copied and pasted a bunch of stuff, there was still some level of research involved. With AI, not so. Just tell it the basics of what you need and it'll deliver it. Reword as you see fit and there you go, essay written - but without the student having any real knowledge of what they've submitted. In truth this ultimately hurts the student more than anything else, as it would with total plagarism from the internet, but whilst some AI generated work will be easy to spot, a lot won't if the student is clever enough with reworking to suit his or her writing style and general level of knowledge. I certainly wouldn't want to be in an academic position trying to work this out. If traditional plagarism is caught a distinct passage or paragraph can normally be highlighted, and if the person grading the essay is able to find the specific piece copied from then it's basically an open-and-shut case. Going to be a lot harder when accusing a student of submitting AI generated work, as that work will likely be pulled from a high amount of sources and given an original slant depending on the prompt entered. If the student says "nope, I wrote that"... not sure what anyone can do about that.

You're right that ultimately academia is going to have to adapt, though. Essays may simply become a thing of the past, as they become too easy to generate from AI and therefore worthless. You wonder if the recent swing from exams to practical work might go back in the other direction, with more focus being put on exams instead, as it's far less likely a student is going to be able to get an AI to write them an essay during a closed exam, or answer questions. If the student doesn't have the knowledge, they simply won't be able to do it. We're then back at the point of it being unfair to students who aren't good in these kinds of pressure situations though, and a high level of reliance on exams could see otherwise good students getting poor grades.

Probably too early at this point for the education sector to determine any concrete course of actions, but they're going to have to use some foresight and start tailoring the academic process to better reflect the way AI is going to fundementally change how society works. It's not a question of 'if' anymore, it's already here and it's only going to accelerate from here.

Edited by DE.
  • Like 1
Link to comment
Share on other sites

  • Backroom

Just spent some time trying out Heypi, a conversational AI chatbot. Honestly, it was scarily impressive. Not perfect but in terms of holding a realistic conversation, far superior to ChatGPT. To make a nerdy Star Trek reference, ChatGPT is like talking to Data, whereas Heypi is like talking to Data with the emotion chip installed. Probably the most interesting thing was that it knew when I was making ridiculous comments and reacted accordingly, at one point saying "🤣 you're just taking the piss now", which was unexpected (and correct, as I was). It's designed to be conversational so doesn't have the same usefulness as far as doing things like generating code or writing summaries, but for what it's designed for it seemed to work very well.

I remember downloading Billy back in the early 2000s (http://www.leedberg.com/glsoft/billyproject.shtml)  and it was mainly just amusing, not something to be taken seriously. Even more recent chatbots like Replika were OK, but not hugely advanced. Heypi feels like a massive step forward. I threw some pretty obscure topics at it, and it was able to converse about all of them with solid knowledge and in a fairly natural way.  

Link to comment
Share on other sites

https://www.digitalinformationworld.com/2023/04/chatgpt-just-failed-this-accounting-exam.html?m=1

I can sit easy for now.

ChatGPT does seem to be really bad for accounting in comparison to other areas. The article talks ChatGPT struggling with advanced maths but accounting isn't advanced maths, it's basic arithmetic (apart from some the corporate finance / stock market stuff but that's a tiny % of the qualification). The challenge is knowing which numbers to add up which is dictated by the accounting standards which is just a big wordy document. Given it passed the Bar exam it should make light work of accounting standards. 

One other quirk is that it gets Credits and Debits mixed up. I think this will partly be because most people talk about crediting a bank account meaning an increase in money however accounting from the business point of view, crediting the bank is a decrease in money. Just shows that ChatGPT is only as good as the information already out there and as we know there's a lot of bullshit and misconceptions on the Internet 

Link to comment
Share on other sites

  • 4 weeks later...

Yeah, I've used it for personal stuff and also for Marketing copy for work. I showed it to my manager who couldn't believe the results for what he asked it. He said you're actually stupid if you're not using it. I hadn't told him I had though. 😆

Link to comment
Share on other sites

The problem with those detectors is that they even detect human-written content as AI. I wrote a 1,800 piece and originality.ai (meant to be the best detector on the market) marked it as 71% likely to be AI-written. So, then I'm having to reword content I already wrote just to prove to the person I wrote it for, that I am a human being. 

Huge can of worms. If the detectors cannot detect correctly, how can they say a student used AI to write an essay?

Link to comment
Share on other sites

35 minutes ago, JacknOry said:

The problem with those detectors is that they even detect human-written content as AI. I wrote a 1,800 piece and originality.ai (meant to be the best detector on the market) marked it as 71% likely to be AI-written. So, then I'm having to reword content I already wrote just to prove to the person I wrote it for, that I am a human being. 

Huge can of worms. If the detectors cannot detect correctly, how can they say a student used AI to write an essay?

Maybe you’re a robot and no one has told you…

Link to comment
Share on other sites

On 07/05/2023 at 22:36, DE. said:

Just spent some time trying out Heypi, a conversational AI chatbot. Honestly, it was scarily impressive. Not perfect but in terms of holding a realistic conversation, far superior to ChatGPT.

Meh. I tried Heypi last night and it refused to believe Rovers beat Burnley on April fools Day 2001. Kept telling me it didn't believe me but it would 'humour me' and play along.

 

Link to comment
Share on other sites

  • Backroom

Sounds like it was just trolling you tbh. Maybe it's a Burnley fan.

It was amusing for a while but I got bored of it pretty quickly. Mainly just using ChatGPT for coding queries now.

Edited by DE.
Link to comment
Share on other sites

I have been using Chat GPT to draft all sorts of notices for the last few months now. So long as you input the correct data to begin with, then the resulting draft is formatted brilliant

It has in the past even added in something I didn't ask. An obligation of a construction subcontractor is that they must 'mitigate any delays'. When I asked ChatGPT to draft a formal notice of delay under each clause it actually left me a blank section for "mitigation", with the words "As per our obligations under the subcontract we have mitigated delays by {insert text here]"

This is something the industry regularly misses when claiming for delay time / cost

I was mesmerised to be honest, it appeared to have an in-depth knowledge of the suite of contracts I was using

Link to comment
Share on other sites

Finding it a real time saver when using excel. I'd sat I'm competent excel but there's loads I don't know. I'd used to Google something previously and spend ages trawling a load of answers that weren't relevant. Now I can ask the question as if I was talking to someone and get the exact answer I want. It even writes out the code / formula that I can copy straight in. Even quite complex formulas are 90% there and all I have to do is alter the cell references. 

Technical accounting queries, ChatGPT gives me a solid starting point of an answer as well pointers in terms of sources, relevant legislation etc. 

We're quite a techbophobic organisation so it's giving me a real edge in terms of speed and quality of work. 

Link to comment
Share on other sites

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.

Guest
Reply to this topic...

×   Pasted as rich text.   Paste as plain text instead

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.

Loading...
×
×
  • Create New...

Important Information

We have placed cookies on your device to help make this website better. You can adjust your cookie settings, otherwise we'll assume you're okay to continue.