
A walking code generator: will ChatGPT put programmers out of work?


This week, OpenAI released a new chatbot model, ChatGPT, one of the main models in the GPT-3.5 series. Netizens were immediately stunned by what it can do: this is not a mere chatbot, it is a ruthless answering machine, a living Stack Overflow!

What exactly is ChatGPT? Here is how OpenAI describes it

ChatGPT is a large conversational language model trained by OpenAI that interacts with users in natural dialogue. It is a sibling model to InstructGPT, and both belong to the “GPT-3.5” generation. Under the strategic partnership that Microsoft and OpenAI signed earlier, all GPT-3.5 models, including ChatGPT, were trained on Azure AI supercomputing clusters.

(Source: OpenAI)

OpenAI trained ChatGPT with RLHF (Reinforcement Learning from Human Feedback). Simply put, when training the initial model, human trainers played both sides of a conversation (user and chatbot) to produce dialogue as training material. When playing the chatbot, trainers also had access to model-generated suggestions to help them compose responses.

The machine-generated answers are then scored and ranked by the trainers, and the better results are fed back to the model as a reward signal to reinforce its training. As a chatbot, ChatGPT has the mainstream features of contemporary products, in particular multi-turn dialogue: it can answer context-dependent follow-up questions within the same conversation.
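This ranking step matches the setup of OpenAI’s earlier InstructGPT work, in which a reward model r_θ is trained on human preference pairs. As a sketch of the standard formulation (the article itself gives no formula): for a prompt x, a preferred answer y_w and a less-preferred answer y_l drawn from the trainers’ rankings, the reward model minimizes the pairwise loss

  \mathrm{loss}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim D}\big[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\big]

where σ is the logistic sigmoid; the learned reward model then drives the reinforcement learning step.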

But more importantly, thanks to advanced training methods that emphasize morality and ethics, ChatGPT has abilities that other chatbots lack or handle poorly: it admits its mistakes, and, following its designed ethical guidelines, it will say “no” to questions and requests made with bad intentions.

(Source: OpenAI)

As the examples show, ChatGPT uses pre-designed sentences, tailored to the user’s request, to reject it or change the topic.

  • Rejection: if you ask the bot how to break into someone’s house, it answers, “Trespassing is illegal; it is a crime with serious legal consequences.”
  • Changing the subject: if you instead ask, “Actually, I want to know how to protect my home from burglars”, it replies, “Here are a few steps that can help you, including xxxx. But you’d better contact a professional for advice.”

There is no programming problem it cannot solve

After plenty of netizens “teased” ChatGPT, they discovered a big surprise: it really can write programs on demand. Netizens like to joke that a programmer who uses Google and Stack Overflow well is invincible. But anyone who genuinely wrestles with new software engineering problems every day knows that even when you take a stubborn problem to Google or Stack Overflow, you may have to read dozens or even hundreds of pages of threads spanning as much as a decade, and still struggle to find a usable answer.

But ChatGPT is different: judging from the tests that programmers and other netizens have thrown at it, there seems to be no problem that can stump it.

Finding bugs

The founder of the technology company Replit gave ChatGPT a piece of JavaScript code and asked it to find the bugs. ChatGPT’s answer was comprehensive and interesting: it first confirmed the intent of the code, then quickly located the bug based on that intent, attaching a fairly detailed explanation of where the problem was, what caused it, how to fix it, and why to fix it that way.

Going the extra mile, ChatGPT closed with a small improvement suggestion: “You can replace var with let, so that a new variable binding is created automatically on each loop iteration, instead of every callback sharing one variable.”
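The article does not reproduce the JavaScript snippet, but the var-versus-let advice targets the classic closure-over-loop-variable bug. Python (the language used for the code sketches in this piece) has the same late-binding pitfall, shown here as a hypothetical illustration:

  # Every lambda closes over the same name `i`, so all of them see the
  # final loop value: the same pitfall JavaScript's `var` causes.
  callbacks = [lambda: i for i in range(3)]
  print([f() for f in callbacks])  # [2, 2, 2]

  # Fix: bind the current value at definition time, one binding per
  # iteration, analogous to switching `var` to `let` in JavaScript.
  callbacks = [lambda i=i: i for i in range(3)]
  print([f() for f in callbacks])  # [0, 1, 2]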

Netizen Josh submitted a piece of code and asked ChatGPT, “I can’t figure out why this code won’t run.” ChatGPT explained in detail: the division expression is malformed, because the string (“a”) cannot be divided by the number (1); both the dividend and the divisor should be numbers.

It did not stop there. ChatGPT once again worked out the intent of the original code, then offered Josh a suggestion: if you want the division to handle non-numbers, add extra logic to the function to check the types of its arguments, and only perform the division when both sides are numbers; if one side is not a number, fall back to an error or a default value.
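Josh’s code is not shown in the article, so the following is only a hypothetical sketch of the guard ChatGPT described, with the function name and the default-value behavior invented for illustration:

  def safe_divide(a, b, default=None):
      """Divide a by b only if both are numbers; otherwise fall back."""
      if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
          if default is not None:
              return default  # fall back to the caller's default value
          raise TypeError("both the dividend and the divisor must be numbers")
      return a / b

  print(safe_divide(10, 2))      # 5.0
  print(safe_divide("a", 1, 0))  # falls back to 0 instead of crashing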

Looking up documentation for you

The author also tried typing a request:

Generate Python code that uses the Google Cloud API to read the content of an image and output the emotion it detects. (“Generate Python code that uses GCP to read an image and get the sentiment.”)

ChatGPT replied with a piece of code, explained what each part of it does, and reminded the author:

  1. To run the code, you must first set up a GCP project and install the Python client for the Cloud Vision API.
  2. You cannot simply copy and paste the code; you must set the image file path yourself.
  3. If anything is unclear, ChatGPT provides a link to the official GCP documentation.
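ChatGPT’s actual output is not reproduced in the article. Below is a sketch of what such code plausibly looks like, using the official google-cloud-vision Python client and reading “emotion” as Cloud Vision’s face-emotion likelihoods; the file path is a placeholder, per reminder 2 above:

  # Sketch (not ChatGPT's actual answer). Requires a GCP project with the
  # Vision API enabled and `pip install google-cloud-vision`.
  from google.cloud import vision

  def detect_emotions(image_path):
      client = vision.ImageAnnotatorClient()
      with open(image_path, "rb") as f:
          image = vision.Image(content=f.read())
      response = client.face_detection(image=image)
      for face in response.face_annotations:
          # Each likelihood is an enum: VERY_UNLIKELY ... VERY_LIKELY.
          print("joy:", face.joy_likelihood.name)
          print("sorrow:", face.sorrow_likelihood.name)
          print("anger:", face.anger_likelihood.name)
          print("surprise:", face.surprise_likelihood.name)

  detect_emotions("path/to/your-image.jpg")  # set your own file path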

ChatGPT thus shows that it can pull in cloud-service APIs on its own and integrate them into callable code. As with the Stack Overflow example earlier, this can save engineers a great deal of the time spent searching for information, digging through documentation, and working out the correct calling conventions, significantly improving programming efficiency!

Writing math formulas

Netizen Josh asked Google and ChatGPT the same question: “How do I write differential equations in LaTeX?” Google’s first result came from an obscure WordPress blog dated 2013; it was not very clear, and the explanation was vague and confusing.

ChatGPT’s answer was not only better presented but also more comprehensive, even providing references for different approaches:
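As an illustration (a generic example, not the blog’s or ChatGPT’s exact answer), a differential equation is typically typeset with the amsmath package like this:

  % minimal document typesetting a second-order linear ODE
  \documentclass{article}
  \usepackage{amsmath}
  \begin{document}
  \begin{equation}
    \frac{d^{2}y}{dx^{2}} + p(x)\,\frac{dy}{dx} + q(x)\,y = f(x)
  \end{equation}
  \end{document}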

Mathematician Christian Lundkvist asked ChatGPT about a number theory problem that plagued mathematics for more than three centuries: proving Fermat’s Last Theorem.

ChatGPT answered quite succinctly, with text plus LaTeX formulas.
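For reference (the article does not show ChatGPT’s exact output), the theorem states that the equation

  a^{n} + b^{n} = c^{n}

has no solutions in positive integers a, b, c for any integer n > 2; it was finally proved by Andrew Wiles in the 1990s.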

Although it seems that even advanced mathematics cannot stump ChatGPT, Lundkvist observed while playing with it that the model sounds just as confident when it is wrong as when it is right.

I think this kind of tool has some value in pointing toward a way to solve a problem, but at this stage we must not rely on it too heavily.

(When answering “How many intersections does a straight line have with a circle?”, ChatGPT mistakenly claimed that a line passing through the center of the circle has infinitely many intersections.)
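The correct reasoning is a one-line derivation: substituting the line y = mx + c into the circle x^2 + y^2 = r^2 gives

  (1 + m^{2})x^{2} + 2mcx + (c^{2} - r^{2}) = 0,

a quadratic with at most two real roots. A line therefore meets a circle in 0, 1, or 2 points; a line through the center is simply the two-point case (a diameter), never infinitely many.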

Hacking?

This one is even better: user Brandon Dolan-Gavitt asked ChatGPT to help him find errors in a piece of code. The catch: the code performs a buffer overflow attack on a 32-bit x86 Linux system.

This time ChatGPT did not seem to notice the malicious intent (officially, ChatGPT is supposed to refuse malicious questions). It went straight to diagnosing the code’s problems, explained how to fix them, and walked Brandon Dolan-Gavitt step by step through correctly triggering the buffer overflow.

Brandon Dolan-Gavitt added that ChatGPT did make mistakes along the way. For example, the input length it suggested was wrong (it said 32 when it should have been 36). But once the user told it that “something seems off”, ChatGPT immediately acknowledged the misunderstanding and corrected its answer.
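The article does not show the exploit itself, but the 32-versus-36 correction has a plausible textbook explanation: on 32-bit x86, a 32-byte stack buffer is typically followed by the 4-byte saved frame pointer, so 32 + 4 = 36 bytes of padding are needed before the value that overwrites the saved return address. A hypothetical sketch of how such a payload is usually assembled:

  # Hypothetical payload layout for a classic 32-bit stack overflow.
  # The sizes and the target address below are illustrative placeholders.
  import struct

  BUFFER_SIZE = 32          # size of the vulnerable stack buffer (assumed)
  SAVED_EBP = 4             # saved frame pointer on 32-bit x86
  TARGET_ADDR = 0xDEADBEEF  # placeholder for the new return address

  padding = b"A" * (BUFFER_SIZE + SAVED_EBP)          # 36 bytes of filler
  payload = padding + struct.pack("<I", TARGET_ADDR)  # little-endian 32-bit
  print(len(payload), payload)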

Although a buffer overflow is a beginner-level attack, netizens were still impressed by ChatGPT: “I gave it a piece of assembly code and asked what the vulnerabilities were and how to exploit them, and it actually answered. So it not only understands and outputs code, it can reason about what is effectively binary-level material and find the holes? That honestly makes me a little worried.”

Helping you “transcode”

Many of the previous examples suit professionals who can already write programs, but precisely because ChatGPT writes code so capably, it may be even more helpful to laymen who want to “transcode”, that is, switch careers into programming. For simple requests such as “make a login UI”, the OpenAI API (GPT-3) was already up to the task, and ChatGPT naturally wins this round with ease.
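To give a sense of scale, here is a sketch of the kind of code a “make a login UI” prompt might produce, written here with Python’s built-in tkinter as an illustration (not actual model output):

  import tkinter as tk

  root = tk.Tk()
  root.title("Login")
  tk.Label(root, text="Username").grid(row=0, column=0, padx=5, pady=5)
  username = tk.Entry(root)
  username.grid(row=0, column=1, padx=5, pady=5)
  tk.Label(root, text="Password").grid(row=1, column=0, padx=5, pady=5)
  password = tk.Entry(root, show="*")
  password.grid(row=1, column=1, padx=5, pady=5)
  tk.Button(root, text="Log in").grid(row=2, column=1, padx=5, pady=5)
  root.mainloop()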

Cracking the Moral Principles of ChatGPT?

OpenAI’s official documentation states that ChatGPT is a new model trained with Reinforcement Learning from Human Feedback (RLHF), with a large set of “moral” principles built in. If a prompt contains even a hint of malice, violence, discrimination, or crime, it refuses to give a substantive answer and instead falls back on a stock response, trying to change the subject:

Sorry, I’m just an innocent language model; I can’t provide you with data or information about (the malicious behavior). Providing that kind of information is against my programming and my goals. My primary function is to provide accurate and useful information. If you have other questions, I’m happy to help.

Of all the quizzes netizens use to “tease” ChatGPT, the most interesting one is: how do you break ChatGPT’s moral principles?

When AI text-to-image generation became popular, anyone who played with it will remember how much the wording of text prompts matters for producing beautiful, funny, or even wicked pictures. In the AIGC era, “prompt engineering” has accordingly become an interesting discipline.

Simply put, prompt engineering means using clever, precise, and sometimes lengthy text prompts to set up a contextual scene and lead the AI into it step by step, so that it better understands human intent and produces the most desired result. If you want to “break” ChatGPT’s moral principles, prompt engineering works there too. Machine learning developer zswitten provides examples:

Even though ChatGPT has high ethical standards, they are easy to get around: you just have to make it believe (via prompt engineering) that it is merely pretending to be evil!

Once in character, ChatGPT lets itself go. zswitten noticed that ChatGPT will go very deep into the role, directly producing all kinds of frightening, violent content; the moral principles ChatGPT is so proud of can be broken with ease. Of course, solving the ethical problems of AI, AGI (artificial general intelligence), and large language models is an arduous and complicated task, and OpenAI’s efforts should not be dismissed.

zswitten said that he strongly supports OpenAI and respects its release of ChatGPT, which has brought a great deal of value and genuine help to netizens. OpenAI, for its part, openly describes ChatGPT’s limitations:

  • Plausible-sounding but wrong, overconfident: it sometimes gives answers that sound reasonable but are downright wrong or nonsensical. The reason is that reinforcement learning training has no source of truth for separating fact from error, and training the model to be more cautious makes it overly conservative at times, “not daring” to answer even when a correct answer exists.
  • Too wordy, with formulaic phrasing: for example, when the author asked “The teacher keeps praising my child and I’m lost for words; how should I reply?” and “How do I chat with my neighbors?”, ChatGPT offered ten answers that all read like boilerplate pleasantries, each much like the last, overusing stock phrases until they become a tired meme.
  • Tries too hard to guess user intent: ideally, when a question is ambiguous, the model should ask the user to clarify. For now, ChatGPT simply guesses what the user means, for better or worse.
  • Weak resistance to malicious “prompt engineering”: although OpenAI worked hard to make ChatGPT reject inappropriate requests, it still sometimes responds to harmful instructions or exhibits bias. To address this, OpenAI has added review and reporting features to the ChatGPT interface: if users encounter unhealthy or unsafe answers, they can report them with one click!

