Q&As: ChatGPT 101/102 Training Sessions

13 mins read

Content type:

  • FAQs

This page is a compilation of all the questions submitted through Pigeonhole during the ChatGPT 101/102 onboarding sessions held on 17 September. It includes responses to every question, including those that were addressed live by the OpenAI speaker. Use this resource to revisit key insights, clarify common queries, and explore additional information shared during the sessions.




Features

Can we enable API Keys so that we can benefit from Agentic AI workflows? Custom GPTs fall short on predictability and state-management.

The AI Team is building a framework to enable API key retrieval via OpenAI’s API Platform. We will be sharing updates on this within the next month.

Can Codex be enabled, and how can we use it?

The AI Team is building a framework to enable the Codex feature on ChatGPT. We will be sharing updates on this within the next month. Meanwhile, you may visit this page published by OpenAI to learn more about the Codex feature and how it can be applied to your work.

What happens if I missed the 8 August deadline?

If you missed the deadline, you may still apply for a license via iProfile after discussing your need for one with your Manager. Requests are subject to approval and availability.

Can we please have some limited monthly quota-based access to GPT-5 Pro as part of our subscription, instead of raising a separate elevated iProfile request?

We understand that there is strong interest amongst staff in applying for access to the GPT-5 Pro model. Currently, access is only enabled through approval of requests raised in iProfile. The AI team will be monitoring the usage rates and patterns, as well as user feedback, to determine the best way forward for enabling such features. If you would like to request access to the GPT-5 Pro model, you may follow the steps published in this guide to apply for it through iProfile.

Will all users have access to the Deep Research function?

If you have a valid business use case, please proceed to apply for access to the Deep Research feature through iProfile. The step-by-step instructions for doing so can be found at this link. However, as Deep Research consumes credits heavily, the AI team will monitor consumption rates, and access may be restricted in the event that the organisation’s credits pool runs low.

Usage and Best Practices

What are the best practices to design prompts, including system prompt and user prompt?

You may refer to the ChatGPT 101 Training Recording on the LMS (Learning Management System) for some of the best practices shared around effective prompt writing. Additionally, feel free to reference this article for some suggested approaches to prompt writing. Alternatively, you can prompt ChatGPT itself for some recommendations or advice around prompt writing!

How best to use ChatGPT as a personal secretary (administrative matters, sending of reminders, setting of appointments, following up on requests, managing tasks and resource planning, project planning, tracking of expenditure, etc.)?

Answered by OpenAI Trainer: One way to use ChatGPT as a personal assistant is via Agent mode, where it can trigger actions across different domains. For more information on Agent mode, please refer to this article, or module 6 in the ChatGPT 102 Recording on the LMS (Learning Management System). A simpler approach is to run a query in chat (e.g., provide the latest updates on a topic of choice) and ask ChatGPT to repeat this query at a regular interval (e.g., every Monday morning at 9am). This turns the query into a ‘task’ that ChatGPT will perform on a regular basis. Another example is asking ChatGPT to send reminders about the meetings you have scheduled for the day, and scheduling this as a ‘task’ that ChatGPT performs once or twice a day.

I want to create an agentic service to help with my work, for others to submit orders, make queries, track progress, retrieve history, and provide suggestions.

Building an agentic workflow may require additional tools beyond ChatGPT. You can reach out to the AI Team for a discussion so that we can better understand your use case.

How to create a custom GPT?

To find out more about creating Custom GPTs and best practices for doing so, you may refer to modules 3 and 4 from the ChatGPT 102 Recording on LMS (Learning Management System). Alternatively, you may refer to these two articles:

1) Introduction to Custom GPTs

2) Writing effective instructions for Custom GPTs

How best can we reduce and avoid hallucinations especially when it comes to fact-based data where accuracy is critical?

Answered by OpenAI trainer: The best way to reduce hallucinations is to include instructions within the prompt. Simple instructions such as ‘double check your output’, ‘verify facts’, and ‘provide citations for resources used’ can help to minimise hallucinations. For output involving external references, verifying sources through the citations provided will help as well.

If I already have a Custom GPT built on GPT-4o, would upgrading it to GPT-5 make a significant difference in results?

Answered by OpenAI trainer: The overarching principle is that the GPT-5 model generally performs better in the areas where the 4o model was already considered strong. Every user should have a pathway to the GPT-5 model, as the 4o model may be sunset in as little as a month. If you encounter challenges or friction in migrating to the GPT-5 model, please inform the AI team so that appropriate feedback can be passed on to the OpenAI team.

Can I perform RAG with ChatGPT Enterprise?

RAG (Retrieval-Augmented Generation) is possible when a connection is made from ChatGPT to data sources (e.g., through the OpenAI API or connectors). The model can then retrieve information dynamically from an external knowledge source (such as a database or knowledge base) before generating a response.
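
For teams exploring RAG outside the ChatGPT app, the sketch below shows the pattern at its simplest using the OpenAI API: embed a small set of documents, retrieve the closest match to a question, and pass it as context. The documents, model names, and question are illustrative assumptions, not an approved Mediacorp setup.

```python
# Minimal RAG sketch using the OpenAI API (not the ChatGPT app itself).
# Documents, models, and the question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "Leave requests must be submitted in the HR portal at least 3 days in advance.",
    "Expense claims above $500 require Head of Department approval.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

doc_vectors = embed(documents)
question = "Who needs to approve a $700 expense claim?"
q_vector = embed([question])[0]

# Retrieve the most relevant document, then generate an answer grounded in it.
best_doc = max(zip(documents, doc_vectors), key=lambda pair: cosine(q_vector, pair[1]))[0]
answer = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{best_doc}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```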

What do terms like ‘system prompt’ and ‘markdown’ mean?

In general, a system prompt refers to the instruction that sets ChatGPT’s overall behaviour and guides its responses. Markdown is a lightweight formatting syntax (headings, bullet points, emphasis) that gives prompts a clearer structure for ChatGPT to follow. For more information on best practices for prompt writing and engineering, you may refer to module 12 of the ChatGPT 101 Recording on LMS (Learning Management System) or the article linked here.
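
As a simple illustration (the scenario and wording below are made-up examples, not taken from the training), a system prompt and a Markdown-structured user prompt might look like this:

```
System prompt (sets overall behaviour):
  You are a meticulous communications editor. Keep the tone formal and concise.

User prompt (structured with Markdown):
  # Task
  Rewrite the draft below into a three-bullet summary for senior management.

  ## Constraints
  - Exactly three bullet points
  - Each bullet under 20 words

  ## Draft
  <paste draft here>
```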

If ChatGPT can generate prompts itself, how does prompt engineering stay relevant—does it shift toward workflow design and strategy instead?

Answered by OpenAI trainer: It is still important to have the fundamental skill of tweaking a prompt to meet your goals, and the judgement to recognise whether a prompt is actually helping you meet them. We might not need to write prompts from scratch, but we need to know how to modify a prompt if the output we are getting is not satisfactory. In other words, you still need prompting skill even when asking ChatGPT to write a prompt for you.

Are non-English languages on par with English for both ChatGPT input and output?

ChatGPT supports many non-English languages and generally performs well for both input and output. However, its strongest performance is still in English, since most of its training data is in English. For widely used languages such as Chinese, Spanish, or French, the quality is usually very good, but for less common languages or highly specialised contexts, responses may be less fluent or accurate.

Can we use ChatGPT to translate copy and scripts, and how accurate will the translation be compared to, say, Google Translate?

Yes, ChatGPT can be used to translate copy and scripts. It often goes beyond direct word-for-word translation by taking into account tone, style, and context, which can make it more suitable for marketing or creative work. In comparison, tools like Google Translate are optimised for literal, fast translations and may be better for short, straightforward text. While ChatGPT’s translations are generally accurate for major languages, it’s still best practice to have a human reviewer check important content before sharing or publishing.

Will ChatGPT be able to read websites for information/context, so as to incorporate it into its answers to our questions?

Yes, if the ‘Web Search’ tool is enabled, ChatGPT can retrieve information from the internet and use it to provide more up-to-date and contextual answers. By default, browsing is not always turned on. Without browsing, ChatGPT answers based only on its trained knowledge and any files or context you provide.

What is the key purpose of the “projects” folders feature? We can just use one GPT chat for multiple projects.

Projects are designed to give you a dedicated workspace where you can group together chats, files, instructions, and collaborators around a specific initiative. Unlike a single chat, which can quickly get cluttered or lose context, a Project keeps all related material organised and consistent — making it easier to revisit, share, and build on over time. You can use one long chat for multiple projects, but using Projects ensures clearer separation, better knowledge management, and avoids mixing up contexts across different workflows.

Will ChatGPT eventually be able to transcribe audio directly from an audio file, just like how we directly upload images for ChatGPT to analyse?

Answered by OpenAI trainer: As of now, ChatGPT cannot transcribe audio. If this functionality is required, users can potentially build on top of OpenAI’s API offering, which is a little more advanced, but audio transcription APIs are available. Otherwise, once a transcript is available for the audio file of interest, users can use that in ChatGPT. For example, if a meeting transcript is available from whatever meeting tool you use, you can put that into ChatGPT and then summarise it. But ChatGPT itself does not currently have the capability to consume and transcribe an audio file.
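
For teams that do go down the API route mentioned above, a transcription call could look roughly like the sketch below. The file name and model are placeholders, and using the API would require an API key arrangement, which is subject to the framework the AI Team is rolling out.

```python
# Minimal sketch of audio transcription via the OpenAI API (outside ChatGPT).
# The file name is a placeholder; requires an API key and the `openai` Python package.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_recording.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)  # the transcript can then be pasted into ChatGPT for summarisation
```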

Is ChatGPT able to create interactive prototypes like Figma, or create user interfaces with interaction and transition effects?

Answered by OpenAI trainer: ChatGPT itself doesn’t replace design tools like Figma, but with Canvas it can generate working code for interactive web pages, dashboards, or simple UI prototypes. The code (often in frameworks like React) can be run, previewed, and edited directly in Canvas, or exported into your own editor for further development. In short, you can use ChatGPT to prototype interactive interfaces in code, but not full drag-and-drop design mock-ups like in Figma.

For ChatGPT Use Cases for Work, if there is not a lot of information regarding certain job roles on the Internet, how will it behave?

If there is limited online information about a specific job role, the Custom GPT may generate more general suggestions instead of highly role-specific examples. It will still try to draw from related tasks, skills, and common workplace scenarios, but the results may be broader or less detailed. You can always refine it by adding your own context or uploading documents, which helps ChatGPT tailor answers more closely to your team’s needs.

Any examples of Actions in Custom GPTs?

An Action is when a Custom GPT connects to an external tool or API to do something beyond text generation. For example, you could create a Custom GPT that calls a calendar API to check your availability and schedule a meeting. Further examples are listed below, followed by a sketch of the kind of endpoint an Action could call:

1) Connects to a knowledge base or database to pull the latest sales figures.

2) Triggers a task in Jira, Asana, or Trello when you ask it to “create a new ticket.”
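
To make the ticketing example concrete, the sketch below shows the kind of simple internal endpoint an Action could point to; in the GPT builder you would then describe such an endpoint with an OpenAPI schema so the Custom GPT knows when and how to call it. The route, fields, and ticket ID are hypothetical.

```python
# Hypothetical internal API endpoint that a Custom GPT Action could call.
# The route, payload fields, and ticket ID are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/tickets", methods=["POST"])
def create_ticket():
    payload = request.get_json()
    # In a real integration this would create a ticket in Jira, Asana, or Trello.
    ticket = {"id": "TCK-123", "summary": payload.get("summary"), "status": "Open"}
    return jsonify(ticket), 201

if __name__ == "__main__":
    app.run(port=5000)
```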

Can a ChatGPT Agent be shared like a Custom GPT?

No. It is possible to share the ingredients of a ChatGPT agent, such as the prompt, but the actual agent itself is private to the user.

Can a ChatGPT agent connect to Jira, get the list of tickets completed for the week, and send an email sharing the weekly status based on the status and comments in the tickets?

Answer from OpenAI trainer: There is currently no first-party connector for Jira, but there are MCP connectors for Jira available for users to fetch ticket information. Additionally, most of the connectors planned for the October launches are for ticketing systems, so users can look forward to that. Regarding the triggering of actions such as sending emails, this type of activity is under development in OpenAI’s next paradigm, so users can stay tuned for further updates in the coming months. Right now, OpenAI’s capabilities are largely limited to the ‘drafting of email’ stage.

Can ChatGPT search for information in S/4HANA?

Not directly. ChatGPT Enterprise cannot automatically access S/4HANA or any other internal system out of the box. To enable this, it must be integrated with S/4HANA via APIs or connectors, so that ChatGPT can query approved datasets and return results. This setup is an example of Retrieval-Augmented Generation (RAG), where ChatGPT retrieves data from a connected knowledge base before generating answers. Any such integration would need to be developed and approved with Mediacorp’s IT and security teams to ensure compliance and data governance.
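
As a purely illustrative sketch of that retrieve-then-generate pattern: the OData URL, entity set, and credentials below are hypothetical, an API key is assumed, and any real integration must go through the approved IT and security review.

```python
# Hedged sketch: fetch records from a hypothetical S/4HANA OData endpoint,
# then ask the model to summarise them. All endpoint details are placeholders.
import requests
from openai import OpenAI

ODATA_URL = "https://s4hana.example.internal/sap/opu/odata/sap/EXAMPLE_SALES_SRV/SalesOrders"

records = requests.get(
    ODATA_URL,
    params={"$top": 5, "$format": "json"},
    auth=("service_user", "service_password"),  # placeholder credentials
    timeout=30,
).json()

client = OpenAI()  # assumes an approved API key in the environment
summary = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarise the retrieved ERP records for a business audience."},
        {"role": "user", "content": f"Records: {records}"},
    ],
)
print(summary.choices[0].message.content)
```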

Future Trainings / Workshops

Can we have subsequent targeted workshops on very specific, specialised usage of ChatGPT?

The AI Team, in collaboration with trainers from OpenAI, is in the midst of planning and rolling out BU (Business Unit) Enablement Sessions to assist users with leveraging ChatGPT for specific tasks related to their work domains. We will be providing updates and sending out invitations progressively to users in the weeks and months ahead.

Data & Governance

If I use the company ChatGPT to help with data compilation, how confidential is the information input into the platform?

ChatGPT Enterprise has been assessed and approved for use with confidential data and information (refer to this link). Nonetheless, BUs may exercise their respective discretion on whether to use the AI tool to process their data, and users should observe and exercise necessary caution when using the tool to uphold their data’s confidentiality.

Can we create images and use them for TV?

While ChatGPT Enterprise has been approved for content drafting, idea generation and brainstorming uses (see Annex B of the Mediacorp AI Policy), we would not recommend using generated images directly for broadcast as-is; use them instead as a visual guide for the final end-product to be produced through conventional production methods. Moreover, Singapore law currently does not clearly recognise copyright ownership for works generated by AI, which may pose issues for our copyright claim on the content if it is AI-generated.

Who can see our data or spreadsheet when we upload them into ChatGPT Enterprise?

Each staff member’s chats, including the documents uploaded as part of those chats, are viewable by that staff member only, unless he/she shares them with other colleagues in the Enterprise workspace.

Is it safe to use real names in conversations?

Yes, you can technically use real names in prompts on ChatGPT Enterprise when it’s needed for your work, because there is enterprise-level security in place for our ChatGPT Enterprise subscription. That said, always avoid unnecessary sensitive details and follow Mediacorp’s data-handling policies, such as the PDPA. If unsure, treat names like you would in any official company system.

When we upload files to ChatGPT Enterprise, where are the files stored? Will the uploaded files be deleted after a certain period of time automatically?

Files uploaded as part of ChatGPT conversations are stored in OpenAI’s cloud infrastructure within Mediacorp’s subscribed workspace, tied to the user’s account and associated with the conversation. These files will be deleted when the associated conversation is deleted by the user. Files may also be uploaded as part of custom GPTs’ configuration during creation. The same deletion principle applies.

If a GPT agent needs to authenticate using our personal credentials (e.g., username and password) to perform certain tasks, does this compromise the security of those credentials?

The short answer is that they are not compromised, but users have options for how they would like to manage stored data. When you log onto a website through Agent mode, the login credentials themselves are not stored, but the session authentication is stored for seven days. Users can go into Settings, then Data Controls, where they can manage browser data. To stop ChatGPT from storing the authentication between sessions, users can turn off site data between sessions. It is also possible to completely delete all previous browsing data by clicking ‘Delete all’.

General / Administrative

I have a personal paid account. But since the company has set up Enterprise for us, can I transfer my chats over and cancel my own account?

Unfortunately, it is not possible to port chat history from a personal account to your Mediacorp ChatGPT Enterprise account, unless your personal subscription was also registered under your Mediacorp email address.

How is ChatGPT different from other AI assistants like Gemini or Copilot?

ChatGPT Enterprise is a general-purpose AI assistant that works across many workflows, from writing and analysis to brainstorming and coding. Unlike Gemini and Copilot, which are strongest within their own ecosystems (Google Workspace and Microsoft 365), ChatGPT is not tied to a single suite of apps and offers flexibility and customisation through features like Custom GPTs and Agent Mode (feature coming soon!). If you are interested in exploring other AI assistants, please feel free to write in to the AI team and we can see how we can support your needs.

How can we check whether our license is an Enterprise one if we joined Mediacorp recently?

To validate whether your ChatGPT account is under Mediacorp’s Enterprise License subscription, you will need to:

1) Sign in using your Mediacorp account via single sign-on

2) Once in the account, you should see the Workspace name ‘Mediacorp’ at the bottom left corner

Where can we find the recording for both sessions?

The recordings of both 101 and 102 ChatGPT Onboarding Sessions are available for viewing on the LMS platform. Users can refer to this link or log in via SSO into LMS to view the recordings in the ‘ChatGPT’ Section under ‘Content’.

