ChatGPT full guide: Everything you need to know

Since its release in 2022, ChatGPT has taken the world by storm. The AI chatbot has transformed many aspects of our lives, from how we write to how we create and even how we problem-solve. Yet, despite all the buzz, many people still don’t know how it works, what it can do, or what its potential downsides are.
 
Did you know? ChatGPT is also available as an app on your phone!
In this blog post, we break down everything you need to know about ChatGPT:
  1. What is ChatGPT?
  2. Who’s behind it?
  3. How does it work?
  4. What can it do?
  5. Is ChatGPT free?
  6. Common myths
  7. Is ChatGPT ethical?
  8. Is ChatGPT bad for the environment?
  9. How to use ChatGPT more responsibly?
  By Manar Sadkou – Reading time: 10 min.
 

What is ChatGPT?

ChatGPT is a chatbot powered by artificial intelligence. It’s a generative AI model, meaning it’s trained to create new content, such as text or images, in response to the user’s prompts and instructions.

The chatbot doesn’t pull answers from a database or search engine. Instead, it generates responses in real time using advanced natural language processing, which allows for human-like conversations.

You can think of it as a far more advanced version of the automated chat services you might find on customer support websites (including mail.com)! But unlike those, ChatGPT can carry out full conversations and answer follow-up questions, all in real-time.

Who’s behind ChatGPT?

ChatGPT was launched by OpenAI in November 2022. The San Francisco-based artificial intelligence research company was only founded in 2015, with a mission to “ensure that artificial general intelligence benefits all of humanity.” Before ChatGPT, OpenAI had already released DALL-E, a generative AI model that produces images from user prompts, essentially an image-generating counterpart to ChatGPT.

OpenAI originally started as a nonprofit, but in 2019 shifted to a hybrid structure – part nonprofit, part for-profit – to help fund its ambitious research and develop artificial general intelligence (a form of AI that rivals human intelligence).

The company was founded by a group of engineers and researchers, with major financial support from tech entrepreneurs like Elon Musk, Reid Hoffman, and Sam Altman (current CEO). While some of the original backers, like Elon Musk, stepped away, one of OpenAI’s most prominent partners and largest stakeholders today is Microsoft, which integrated OpenAI’s models into its own products like Bing and Microsoft 365.

How does ChatGPT work?

The GPT in ChatGPT stands for “Generative Pre-trained Transformer”, meaning it was trained on a vast dataset (books, websites, articles, and more) before being fine-tuned for specific tasks like answering questions or explaining complex topics. During its training, the model learns patterns, relationships between words, and how to structure coherent text, which is why its answers often feel surprisingly fluent and human-like.

So, when you ask ChatGPT a question, it doesn’t look up the answer in a database or search the internet but rather generates a response by predicting what words should come next based on what it has learned and the prompt you’ve given it.
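
Curious what that looks like in practice? Here’s a minimal sketch of sending a prompt to an OpenAI model through the official openai Python package and printing the generated reply. It assumes you have an API key stored in the OPENAI_API_KEY environment variable, and the model name is only an example:

```python
# Minimal sketch: send a prompt and print the generated reply.
# Assumes the official "openai" Python package is installed and an API key
# is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever you have access to
    messages=[
        {"role": "user", "content": "Explain inflation in one short paragraph."}
    ],
)

# The reply isn't looked up anywhere: it's generated word by word,
# each token predicted from your prompt and everything generated so far.
print(response.choices[0].message.content)
```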

What can ChatGPT do?

The chatbot is incredibly versatile, which is why its uses are almost limitless. From creative writing to music composition to problem-solving, ChatGPT can help you with just about anything if you use the right prompt.  So, if you’re asking yourself, “Can ChatGPT generate images?” or “Can ChatGPT read PDFs?”, look no further!

Here’s a list of 17 things ChatGPT can potentially help with:
  • Answering general questions like “What’s the weather like today?” or “How far is New York from Washington, D.C.?”
  • Writing an email (Check out our explainer on: How to use ChatGPT prompts to write effective emails)
  • Translating from one language to another
  • Writing an essay
  • Summarizing long documents, PDFs, or articles
  • Generating an image based on prompts
  • Solving an equation or math problem
  • Creating a cooking recipe based on a list of ingredients
  • Creating to-do lists
  • Planning a trip
  • Creating a chart or a table
  • Coding and programming (see the short example after this list)
  • Explaining complex topics, like quantum computing or inflation, in simple terms
  • Paraphrasing or rewording text
  • Splitting the bill among a group
  • Brainstorming ideas for social media content, gifts, or even business names
  • Proofreading and editing text
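
To make the “coding and programming” item a bit more concrete, here’s the kind of small, self-contained script ChatGPT might produce for a prompt like “write a Python function that splits a restaurant bill evenly, including tip.” (This snippet is our own illustration, not actual ChatGPT output.)

```python
# Example of the kind of small script ChatGPT can generate on request.
# (Illustrative only; not actual ChatGPT output.)

def split_bill(total: float, people: int, tip_percent: float = 15.0) -> float:
    """Return what each person owes, tip included, rounded to cents."""
    if people < 1:
        raise ValueError("Need at least one person to split the bill.")
    with_tip = total * (1 + tip_percent / 100)
    return round(with_tip / people, 2)

print(split_bill(86.40, people=4, tip_percent=18))  # -> 25.49
```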

Is ChatGPT free?

Well, yes and no. While there is a free version of the chatbot offering access to the AI’s basic capabilities, it does come with heavy usage limitations.

When ChatGPT was first launched in 2022, it was powered by GPT-3.5, which is still the model available to free-tier users today. Although it can be useful for simple questions and everyday tasks, it struggles with more complex prompts or questions that require heavy reasoning.

Since then, OpenAI has rolled out faster and more accurate models like GPT-4, GPT-4o, and GPT-4.5, which give users access to more advanced features like file uploads, image generation, voice interaction, web browsing, and better performance overall. Access to these models, however, is limited to users who subscribe to ChatGPT Plus, which costs $20 per month. For access to even more advanced features and higher usage limits, OpenAI also offers a Pro plan, which has a much steeper cost of $200 per month.

So, yes, ChatGPT is free, but its most powerful features live behind a paywall.

Common myths and misconceptions about ChatGPT

As with anything people don’t fully understand, a few myths and misconceptions have grown up around ChatGPT and generative AI in general. But is there any truth to them? Let us set the record straight:

Myth #1: ChatGPT is unbiased

Truth: ChatGPT, as with any other generative AI model, is influenced by the data it’s trained on, which means biases can and do show up in its responses. For example, if you ask ChatGPT a question like “Do you think I was in the wrong in situation A?”, it might provide a different answer than another generative AI chatbot like DeepSeek.

That’s because their responses don’t reflect some universal truth: each model’s answers are shaped by its training data and design choices, much as people’s opinions are shaped by their experiences. So don’t take either ChatGPT’s or DeepSeek’s response as the final word; treat it as one point of view to weigh against other sources.

Myth #2: ChatGPT is always accurate

Truth: ChatGPT can and does make mistakes. That’s because its knowledge is based on the data it was trained on, meaning it has limitations in accessing real-time data. This is especially true for older models like GPT-3.5, which don’t have any web browsing capabilities.

Even newer models like GPT-4o, which are supposed to be more accurate, score only around 88.7% on standard knowledge benchmarks such as MMLU. Sometimes ChatGPT even experiences “hallucinations” – in other words, it makes things up, giving you a response that sounds factual but actually isn’t.

For example, if you ask ChatGPT to give you a quote to support your argument for an essay, it could just invent one. The quote might sound right, but it was never actually said by anyone. If you ask ChatGPT to cite the source, it may then reveal that it was fabricated or even still confidently make something else up.

In other words, even though ChatGPT always sounds confident, it doesn’t mean it’s always right. Sometimes it’s just confidently wrong, so always double-check important information.

Myth #3: ChatGPT will replace human employees

Truth: The reality is a bit more complicated. ChatGPT and generative AI in general will definitely transform the way we work, but that doesn’t mean it will or even can make most jobs disappear.

In most cases, ChatGPT is more likely to automate repetitive tasks within jobs rather than replace entire roles. So, it will support the work people already do rather than eliminate it.

There is a lot of fear surrounding this topic, and rightfully so, especially when the headline of an article claims “a Goldman Sachs report found that 300 million jobs could be lost due to AI.” However, this is a common misinterpretation. What the report actually says is that up to 300 million jobs could be “exposed” to automation, which doesn’t necessarily translate to layoffs.

The authors of the report themselves state: “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI.”

So yes, there is an unfortunate reality that some jobs may become obsolete, but on the other hand, it’s just as important to recognize that the real shift will be in how people work. What the integration of generative AI into the labor market really means is that workers will need to adapt to new tools and learn how to use AI effectively rather than compete with it.

Myth #4: There are no humans behind ChatGPT’s responses

Truth: AI doesn’t generate answers entirely on its own, and humans actually play a big role behind the scenes.

ChatGPT generates responses automatically, but its training and fine-tuning involved thousands of hours of human input. OpenAI used a method called Reinforcement Learning from Human Feedback (RLHF), where real people evaluated and ranked AI responses to teach the model what “good” answers should look like. And when ChatGPT gives you two different answers and asks you to choose which one you like best, you are also providing feedback that helps the model improve its future responses.
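
To give a rough feel for how ranked feedback becomes a learning signal, here is a toy, heavily simplified sketch in Python. It is not OpenAI’s actual training code; it only illustrates the common idea of comparing the reward scores of two responses and nudging the human-preferred one higher:

```python
# Toy illustration of preference-based feedback (Bradley-Terry style).
# Not OpenAI's training code; just the core idea behind ranking answers.
import math

def preference_probability(score_preferred: float, score_other: float) -> float:
    """How strongly the reward model currently agrees with the human ranking."""
    return 1.0 / (1.0 + math.exp(-(score_preferred - score_other)))

# Hypothetical reward scores the model currently assigns to two answers;
# here it wrongly scores the evasive answer higher than the helpful one.
score_helpful = 0.4
score_evasive = 1.1

p = preference_probability(score_helpful, score_evasive)
loss = -math.log(p)  # training minimizes this, pushing the agreement toward 1.0

print(f"Agreement with the human ranking: {p:.2f}")
print(f"Loss to minimize: {loss:.2f}")
```

Over millions of such comparisons, the model gradually learns to score helpful, safe answers higher, which in turn shapes the responses ChatGPT gives you.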

Humans were also involved in data annotation during training. This refers to the process of going through millions of web pages and annotating the text with information about the relationships between different words and concepts so the AI can learn to recognize them.

Post-training, ChatGPT still relies on human reviewers who monitor flagged interactions, test responses, and help ensure the model stays safe and aligned with specific guidelines.

However, it’s worth noting that much of this reviewing work has been outsourced to contractors, often in lower-income countries, where workers have reported low pay, high workloads, and emotional toll, especially when moderating harmful or disturbing content. This has raised important ethical questions about labor conditions in the AI supply chain and is a reminder that the human cost of AI is often hidden from view.

Is ChatGPT ethical?

There is no simple yes or no answer to this question. On the one hand, generative AI tools like ChatGPT can be incredibly useful. They have the potential to make information more accessible, support education, and even help people with disabilities communicate more easily.

This doesn’t, however, take away from the serious ethical questions ChatGPT raises. We’ve already mentioned a few, like the fabrication of data, built-in biases, and the inhumane labor conditions in parts of the AI supply chain.

So with that in mind, here are a few more key ethical issues surrounding ChatGPT:
  • Privacy violations: Sharing your personal data with ChatGPT can pose major privacy risks, especially if you’re unaware of what the model retains and how that data may be used. The truth is that, unless you manually turn off chat history, all of your conversations with ChatGPT are stored by OpenAI and could potentially be used to train future models. Privacy risks become even more serious in work, legal, or healthcare-related contexts, where sharing sensitive information, even unintentionally, could have real-world consequences.
  • Plagiarism: It should come as no surprise that using ChatGPT can unintentionally lead to plagiarism, given that the AI is trained on vast amounts of written material from across the web. Although ChatGPT may not copy and paste content word-for-word, its output can sometimes closely resemble the phrasing or structure of existing material. The issue is further complicated by ChatGPT’s failure to cite its sources, which makes it harder for users to verify where information came from or to credit the original authors.
  • Transparency: OpenAI has been criticized for its lack of transparency, particularly around which data it uses to train its models, how content is reviewed, and who is responsible for the review process. While the company cites the “competitive landscape” and “safety implications” as reasons for keeping these details confidential, this lack of clarity has raised concerns among researchers, policymakers, and the public. It has also sparked criticism from within: in June 2024, a group of current and former OpenAI employees published an open letter warning that major AI companies lack the transparency and accountability needed to address serious risks posed by the technology, including misinformation, inequality, and even the potential loss of control over autonomous systems.

Is ChatGPT bad for the environment?

Besides its ethical implications, generative AI is also in murky waters when it comes to its environmental impact. The development and training of large language models like ChatGPT is incredibly resource-intensive and comes at a high environmental cost, especially at a time when the planet is already under pressure from climate change and energy overconsumption.

Let’s break down ChatGPT’s environmental footprint:
  • Carbon emissions: Estimates of generative AI’s carbon footprint vary widely depending on the scope being measured. One analysis suggests ChatGPT generates over 260 metric tons of CO₂ per month globally, roughly equivalent to 260 flights between New York and London. Meanwhile, other estimates place per-user emissions at around 8.4 tons per year, which is more than twice the annual carbon footprint of the average person.
  • Electricity use: Generative AI tools like ChatGPT use a huge amount of electricity, mostly due to the data centers that power them. These facilities run thousands of servers and are expanding fast to keep up with AI demand. In 2022, global data centers used as much electricity as entire countries like France, and that number is expected to more than double by 2026. Training just one model like GPT-3 can use as much energy as 120 U.S. homes do in a year. Experts warn that this growing demand is putting pressure on power grids and increasing reliance on fossil fuels, making the environmental impact even more concerning.
  • Water consumption: Further contributing to ChatGPT’s negative environmental impact is the significant amount of water required to cool the data centers that run it. A study by researchers at the University of California estimated that training GPT-3 alone used approximately 700,000 liters of fresh water. That’s roughly a quarter of an Olympic-sized swimming pool (see the quick back-of-envelope check after this list). But it doesn’t stop there. Ongoing use of the model means its data centers continue to consume even more water. ChatGPT’s daily water consumption is estimated at around 39.16 million gallons, which roughly equates to the annual water use of over 430 U.S. households. Considering the increasing droughts and water scarcity our world is experiencing, this level of water usage raises serious questions about the long-term sustainability of generative AI.
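
If you like to see the arithmetic behind such comparisons, here’s a quick back-of-envelope check. The underlying figures (roughly 1,300 MWh to train GPT-3, about 10,600 kWh of electricity used by an average U.S. home per year, and about 2.5 million liters in an Olympic pool) are outside estimates we’re assuming for illustration:

```python
# Back-of-envelope check of two comparisons above.
# Assumed figures (published estimates, not exact): GPT-3 training energy,
# average annual U.S. household electricity use, Olympic pool volume.

training_energy_kwh = 1_300_000     # ~1,300 MWh to train GPT-3
home_annual_kwh = 10_600            # average U.S. home per year (EIA estimate)
print(round(training_energy_kwh / home_annual_kwh), "homes' worth of electricity")

training_water_liters = 700_000     # fresh water used to train GPT-3 (cited above)
olympic_pool_liters = 2_500_000     # 50 m x 25 m x 2 m pool
print(f"{training_water_liters / olympic_pool_liters:.0%} of an Olympic pool")
```

The exact numbers vary between studies, but the orders of magnitude are what matter here.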

How to use ChatGPT more responsibly?

Now that you’ve learned all about the ethical implications of ChatGPT and its environmental impact, you must be wondering how, or even if, you can use ChatGPT more responsibly. The good news is: yes, it is possible. While individual actions won’t solve systemic issues, there are still steps you can take to reduce harm and make more mindful use of generative AI technology.

Here are a few ideas to get you started:
  1. Avoid sharing sensitive information. To reduce the risk of privacy violations and keep your data from being used to train future models, turn off ChatGPT’s chat history feature whenever you’re discussing something confidential, especially in work, legal, or medical contexts.
  2. Double-check ChatGPT-generated content. As we explained earlier, ChatGPT, like other generative AI models, can and does “hallucinate.” That means some of the content it generates may be fabricated, which is why you should always verify facts, check citations, and avoid copy-pasting content without reviewing it.
  3. Limit unnecessary requests. Just because ChatGPT can make things easier doesn’t mean it should be your go-to for everything. Every prompt uses energy, so asking it for a quick recipe or something you could easily Google adds to its environmental impact. If you can figure something out yourself, consider doing so, especially if it’s a small, repeated task.
  4. Choose the right model for every task. Not every prompt needs the most powerful AI model. For simpler questions or quick tasks, using a less advanced (and less resource-intensive) model like GPT-3.5 can reduce energy use. Save the heavier models for when you truly need their capabilities (see the short sketch after this list).
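
As a small illustration of tip 4, here’s a hedged sketch of how you might route quick questions to a lighter model and reserve a heavier one for complex requests when working with the API. The model names and the crude length-based rule are our own illustrative assumptions, not official OpenAI guidance:

```python
# Sketch: send short, simple prompts to a lighter model and longer,
# more complex ones to a heavier model. Model names and the length rule
# are illustrative assumptions, not official guidance.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    # Crude heuristic: treat long prompts as "complex".
    model = "gpt-4o" if len(prompt) > 400 else "gpt-4o-mini"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("What's a quick recipe with eggs, spinach and feta?"))
```

If you use the regular chat interface instead of the API, the equivalent habit is simply picking the smaller model from the model menu (where your plan allows it) for routine questions.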
OpenAI’s ChatGPT is changing the way we live and work, but that doesn’t mean we should use it without thinking. A bit of awareness goes a long way in making sure we get the most out of AI without losing sight of its real-world impact. The more we understand its capabilities and limitations, the better we can shape a future where AI serves us all responsibly.

If you found this article helpful, leave us feedback below. And if you still don’t have a mail.com account, why not sign up for free today?

