Wow, the text generator that doesn’t actually understand what it’s “writing” is making mistakes? Who could have seen that coming?
I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors that any first-year CS student could have caught them. Nobody should trust LLMs with anything related to security, FFS.
ftfy
Also, any inputs are probably scraped and used for training, and none of these people get GDPR.
ftfy
Let’s hope it’s the bad outputs that are scrapped. <3
Eh, I’d say mostly.
I have one right now that looks at data and says “Hey, this is weird, here are related things that are different when this weird thing happened. Seems like that may be the cause.”
Which is pretty well within what they are good at, especially if you are doing the training yourself.
I wish we could say the students will figure it out, but I’ve had interns ask for help and then I’ve watched them try to solve problems by repeatedly asking ChatGPT. It’s the scariest thing - “Ok, let’s try to think about this problem for a moment before we - ok, you’re asking ChatGPT to think for a moment. FFS.”
Critical thinking is not being taught anymore.
Has critical thinking ever been taught? Feel like it’s just something you have or you don’t.
Critical thinking is essentially learning to ask good questions and also caring enough to follow the threads you find.
For example, if mental health is to blame for school shootings then what is causing the mental health crisis and are we ensuring that everyone has affordable access to mental healthcare? Okay, we have a list of factors that adversely impact mental health, what can we do to address each one? Etc.
Critical thinking isn’t hard; it just takes time and effort.
I had a chat w/ my sibling about the future of various careers, and my argument was basically that I wouldn’t recommend CS to new students. There was a huge need for SW engineers a few years ago, so everyone and their dog seems to be jumping on the bandwagon, and the quality of the applicants I’ve had has been absolutely terrible. It used to be that you could land a decent SW job without having much skill (basically a pulse and a basic understanding of scripting), but I think that time has passed.
I absolutely think SW engineering is going to be a great career long-term, I just can’t encourage everyone to do it because the expectations for ability are going to go up as AI gets better. If you’re passionate about it, you’re going to ignore whatever I say anyway, and you’ll succeed. But if my recommendation changes your mind, then you probably aren’t passionate enough about it to succeed in a world where AI can write somewhat passable code and will keep getting (slowly) better.
I’m not worried at all about my job or anyone on my team; I’m worried for the next batch of CS grads who ChatGPT’d their way through their degree. “Cs get degrees” isn’t going to land you a job anymore; passion about the subject matter will.
Altering the prompt will certainly give a different output, though. Ok, maybe “think about this problem for a moment” is a weird prompt; I see how it actually doesn’t make much sense.
However, including something along the lines of “think through the problem step-by-step” in the prompt really makes a difference, in my experience. The LLM is then more likely to include sections of “reasoning”, and thereby arrives at an output that’s more correct or of higher quality.
This, to me, seems like a simple precursor to the way a model like OpenAI’s new o1 (partly) works: it “thinks” about the prompt behind the scenes and presents only the resulting output to the user, plus a generated summary of the raw “thinking” that’s hidden by default.
Of course, it’s unnecessary, maybe even stupid, to include nonsense or small talk in LLM prompts (unless it has proven to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning) can definitely make a great difference.
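To make that concrete, here’s a minimal sketch of what I mean, using the OpenAI Python SDK; the model name and the example question are placeholders I picked, and the exact wording of the “step-by-step” nudge is whatever works for you:

```python
# Minimal sketch: the same question asked twice, once plainly and once with a
# step-by-step nudge. Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY
# in the environment; the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works for the sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Plain prompt: the model may jump straight to an answer (sometimes the wrong one).
print(ask(QUESTION))

# Nudged prompt: asking for explicit reasoning tends to surface the intermediate
# steps in the output, which often improves the final answer.
print(ask("Think through the problem step-by-step before answering.\n\n" + QUESTION))
```

Models like o1 essentially do that second step server-side, which, as far as I understand, is why OpenAI says you don’t need to ask them to think step-by-step anymore.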
My experience with ChatGPT goes like this:
I interviewed someone who used AI (Copilot, I think), and while it somewhat worked, it gave the wrong implementation of a basic algorithm. We pointed out the mistake, the developer fixed it (we had to provide the basic algorithm, which was fine), and then they refactored and the AI spat out the same mistake, which the developer again didn’t notice.
AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion), but you really do need to know what you’re doing. I recommend new developers avoid AI like the plague until they can use it to cut out the mundane stuff instead of filling in their knowledge gaps. It’ll do a decent job with certain prompts (e.g. “generate me a function/class that…”), but you’re going to need to go through it line-by-line and make sure it’s actually doing the right thing. I find writing code to be much faster than reading and correcting code, so I don’t bother w/ AI, but YMMV.
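The algorithm from that interview isn’t the point, so here’s a made-up stand-in just to illustrate the kind of thing I mean: a binary search that looks plausible and even passes a quick happy-path check, but hangs on other inputs. It’s exactly the sort of bug a line-by-line read catches and a skim doesn’t.

```python
# Hypothetical illustration (the actual algorithm from the interview isn't named):
# a binary search the way an assistant might plausibly emit it. It finds middle
# elements just fine, but `low = mid` / `high = mid` can stop the search range
# from shrinking, so it loops forever on plenty of inputs (e.g. a missing target).
def binary_search_buggy(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid          # bug: should be mid + 1
        else:
            high = mid         # bug: should be mid - 1
    return -1

# Corrected version: the bounds have to move past mid, or the loop never narrows.
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```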
An area where it’s probably ideal is finding stuff in documentation. Some projects are huge and their search sucks, so being able to say, “find the docs for a function in library X that does…”, is genuinely useful. I know what I want, I just may not remember the name or the module, and I certainly don’t remember the argument order.
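As a made-up example of that kind of query (the library and function here are mine, not something from above):

```python
# Hypothetical example of the "point me at the docs" use case.
# I know the Python standard library can group consecutive equal elements,
# but I never remember the function name, the module, or the argument order.
prompt = (
    "Which function in Python's standard library groups consecutive equal "
    "elements of an iterable? Give the module, the name, and the signature."
)
# The answer I'm fishing for is itertools.groupby(iterable, key=None), which
# takes seconds to confirm in the official docs once you have the name.
```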
All the while it gets further and further from the requirements. So you open five more conversations, give them the same prompt, and try to pick which one is least wrong.
All the while realising you did this to save time, but at this point coding from scratch would have been faster.