DeepSeek "messed up" pit miserable college students? This topic rushed to the hot search! There is salvation......

At a moment when artificial intelligence is developing at breakneck speed,

a key question arises for students and researchers:

is it really reliable to write papers with the help of these cutting-edge models?

Not long ago, the hashtag #ways to keep DeepSeek from fabricating references# shot onto the hot-search list.

Reporters noticed that getting "burned" while using AI tools to write papers is not uncommon.

Netizens recounting their own experiences said that, beyond fabricating papers and references, AI has also invented legal provisions,

and has even gone off the rails when teaching people to cook.

Ask the AI a question and it gives you a remarkably detailed, rich, and logical-sounding answer. But check it, and you find the information is entirely fictitious. This is the phenomenon known as "AI hallucination".

"AI hallucinations" are when AI generates information that seems plausible but is actually false, most commonly by fabricating facts or details that don't exist.

There are many causes of "AI hallucinations": prediction based on statistical relationships rather than understanding; limitations of the training data; overfitting, i.e., the model memorizes errors or irrelevant details and becomes overly sensitive to noise in the training data; limited context windows; a design that favors fluent-sounding answers; and more.
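To make the first cause concrete: a model that only predicts the next word from statistics will happily assemble fluent claims with no grounding in fact. Here is a minimal toy sketch in Python; the vocabulary and probabilities below are invented purely for illustration:

```python
import random

# Toy bigram "language model": for each word, candidate next words with
# probabilities estimated from text statistics alone. (Vocabulary and
# numbers are invented for illustration.) Nothing here encodes truth.
BIGRAM = {
    "<s>":    [("a", 1.0)],
    "a":      [("recent", 0.7), ("famous", 0.3)],
    "recent": [("study", 1.0)],
    "famous": [("paper", 1.0)],
    "study":  [("shows", 0.6), ("proves", 0.4)],
    "paper":  [("shows", 0.6), ("proves", 0.4)],
    "shows":  [("that", 1.0)],
    "proves": [("that", 1.0)],
    "that":   [("coffee", 0.5), ("tea", 0.5)],
    "coffee": [("cures", 1.0)],
    "tea":    [("cures", 1.0)],
    "cures":  [("insomnia", 0.5), ("colds", 0.5)],
}

def generate(max_len: int = 10) -> str:
    """Sample a fluent-sounding sentence word by word."""
    word, out = "<s>", []
    while word in BIGRAM and len(out) < max_len:
        words, probs = zip(*BIGRAM[word])
        word = random.choices(words, weights=probs)[0]
        out.append(word)
    return " ".join(out)

# Prints e.g. "a recent study proves that coffee cures insomnia":
# perfectly fluent, statistically plausible, and entirely made up.
print(generate())
```

Real large models are vastly more sophisticated, but the underlying objective is the same: predict the next token. That is one root of hallucination.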

So, when AI can "lie", how do you avoid being led astray? And in this wave of technology, how do we protect our capacity for critical thinking?

Lu Jiayin, a professor at the School of Journalism at Renmin University of China, pointed out that in terms of knowledge construction, AI-generated false academic results can distort young people's understanding of scientific methodology and weaken their training in the "hypothesis-verification" logic of research. In terms of the development of thinking, AI-generated falsehoods, amplified by algorithmic recommendation into "information cocoons", form a closed logical loop that hinders the cultivation of critical thinking.

Li Yanyan, a professor at the Faculty of Education at Beijing Normal University and deputy director of the Beijing Key Laboratory of Educational Technology, suggested treating AI as an intellectual partner in an equal dialogue: guided conversation and interaction with AI can help individuals form chains of reasoning and calibrate cognitive biases. This kind of de-authoritized critical-thinking training helps us maintain independent judgment in conversations with AI and achieve cognitive leaps.

Song Linze, an associate professor at the School of Marxism at Beijing University of Posts and Telecommunications, believes that content output by AI is only a cognitive starting point, not an end point. College students need to take the initiative to verify it: looking up authoritative sources, comparing different points of view, and even communicating directly with experts in the field. This verification process not only helps us understand a problem more comprehensively, but also lets us exercise sounder judgment when faced with complex information.

How should we deal with AI hallucinations?

Tian Wei, a researcher of AI tools, noted that the way a question is asked is crucial to getting accurate answers. Communication with AI needs to be clear and specific; avoid vague or open-ended questions. The more specific and clear the question, the more accurate the AI's answer.

Summarizing the prompting techniques, there are four ways to ask (a small code sketch follows the examples below):

1. Set boundaries: "Please strictly limit the scope to research published in xx journal in xx year";

Examples: "Introduce the development history of ChatGPT" → "Please introduce the development history of ChatGPT based only on OpenAI's official public documents for the year 2023-0"

2. Label uncertainty: "For uncertain or vague information, mark it with 'this is speculation'"; Example: "Analyze Tesla's market share in 20xx" → "Analyze Tesla's market share in 20xx, and mark any unofficial data or forecast content as [Speculative content]"

3. Break the task into steps: "Step one, list the established factual basis; step two, carry out the detailed analysis"; Example: "Assess the impact of AI on employment" → "Please assess the impact of AI on employment in two steps: 1) first list specific cases of impact that have already occurred; 2) then analyze future trends based on those cases".

4. State clear constraints: explicitly tell the AI to answer based on existing facts only and not to speculate.

Example: "Predict the trend of the real estate market in 2024 years" → "Please analyze only the actual real estate data in 0 years and the relevant policies that have been introduced, and do not add any speculative content".

As for how to obtain accurate answers, the AI's own reply is that the risk of AI fabricating literature can be greatly reduced through a triple safeguard: instruction constraints, tool verification, and human review. Notably, the AI itself regards "human review" as the "last line of defense".
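The "tool verification" step can be partly automated. As one hedged example, assuming a cited work carries a DOI, the public Crossref REST API (https://api.crossref.org/works/{DOI}) can confirm whether that DOI is on record at all:

```python
import requests  # third-party: pip install requests

def doi_exists(doi: str) -> bool:
    """Ask the public Crossref registry whether a DOI is on record.

    HTTP 200 means Crossref knows the work; 404 strongly suggests the
    citation (or at least its DOI) was fabricated.
    """
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Crossref etiquette: identify yourself (address is a placeholder).
        headers={"User-Agent": "citation-checker/0.1 (mailto:you@example.org)"},
        timeout=10,
    )
    return resp.status_code == 200

# Check references an AI produced before trusting them.
for doi in ["10.1038/nature14539",          # a real paper's DOI
            "10.9999/not.a.real.record"]:   # an obviously invented one
    status = "found" if doi_exists(doi) else "NOT FOUND -- check by hand"
    print(f"{doi}: {status}")
```

A passing check only proves the DOI exists, not that the AI summarized the work faithfully, which is exactly why human review comes last in the chain.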

Experts also point out that "AI hallucination" is not without benefits; sometimes it is better seen as a source of creativity than as a flaw. In writing, art, or brainstorming, these imaginative "leaps" may actually open the door to new worlds.

In essence, "AI hallucination" means that, in the fog of knowledge, AI sometimes conjures "shadows" that look real but are illusory. Yet like any tool, it all comes down to how you use it.

At the end of the day, in this era where AI is progressing with humanity, it's important not to blame AI for its imperfections, but to learn to work better with it.

(China Youth Daily)