This page is a compilation of all the questions submitted through Pigeonhole during the ChatGPT 101/102 onboarding sessions held on 17 September. It includes responses to every question, including those addressed live by the OpenAI speaker. Use this resource to revisit key insights, clarify common queries, and explore additional information shared during the sessions.
Features
The AI Team is building a framework to enable API Key Retrieval via OpenAI’s API Platform. We will share updates on this within the next month.
The AI Team is building a framework to enable the Codex feature on ChatGPT. We will share updates on this within the next month. Meanwhile, you may visit this page published by OpenAI to learn more about the Codex feature and how it can be applied to your work.
If you missed the deadline, you may still apply for a license via iProfile after discussing your need for one with your Manager. Requests are subject to approval and availability.
We understand that there is strong interest amongst staff in applying for access to the GPT-5 Pro model. Currently, access is only enabled through approval of requests raised in iProfile. The AI team will be monitoring usage rates and patterns, as well as user feedback, to determine the best way forward for enabling such features. If you would like to request access to the GPT-5 Pro model, you may follow the steps published in this guide to apply for it through iProfile.
If you have a valid business use case, please proceed to apply for access to the Deep Research feature through iProfile. The step-by-step instructions for doing so can be found in this link. However, as Deep Research consumes credits heavily, the AI team will monitor consumption rates, and access may be restricted in the event that the organisation’s credits pool runs low.
Usage and Best Practices
You may refer to the ChatGPT 101 Training Recording on the LMS (Learning Management System) for some of the best practices shared around effective prompt writing. Additionally, feel free to reference this article for some suggested approaches to prompt writing. Alternatively, you can prompt ChatGPT itself for some recommendations or advice around prompt writing!
Answered by OpenAI Trainer: One way to use ChatGPT as a personal assistant is via Agent mode, where it can trigger actions across different domains. For more information on Agent mode, please refer to this article, or module 6 in the ChatGPT 102 Recording on the LMS (Learning Management System). A simpler approach is to run a query in chat (e.g., provide the latest updates on a topic of your choice) and ask ChatGPT to run this query at a regular interval (e.g., every Monday morning at 9am). This turns the query into a ‘task’ that ChatGPT will perform on a regular basis. Another example of a query that can be used this way is asking ChatGPT to send reminders about the meetings you have scheduled for the day, and scheduling this as a ‘task’ that ChatGPT performs once or twice a day.
Building an agentic workflow may require additional tools beyond ChatGPT. You can reach out to the AI Team for a discussion to better understand your use case.
To find out more about creating Custom GPTs and best practices for doing so, you may refer to modules 3 and 4 from the ChatGPT 102 Recording on LMS (Learning Management System). Alternatively, you may refer to these two articles:
Answered by OpenAI trainer: The best way to reduce hallucinations is to include instructions within the prompt. Simple instructions like ‘double-check your output’, ‘verify facts’ and ‘provide citations for resources used’ can help minimise hallucinations. For output involving external references, verifying sources through the citations provided will help as well.
Answered by OpenAI trainer: The overarching principle is that the GPT-5 model generally performs better in the areas where the 4o model was considered strong. Every user should have a pathway to the GPT-5 model, as the 4o model may be sunset in less than a month. If any user encounters challenges or friction in the migration pathway to the GPT-5 model, please inform the AI team so that appropriate feedback can be passed on to the OpenAI team.
RAG (Retrieval Augmented Generation) is possible when a connection is made from ChatGPT to data sources (e.g., through the OpenAI API or connectors). The model can then retrieve information dynamically from an external knowledge source (such as a database or knowledge base) before generating a response.
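As a rough illustration of this retrieve-then-generate pattern, the hedged Python sketch below uses the OpenAI API (an option outside the ChatGPT interface) with a tiny in-memory knowledge base. The knowledge base contents, the helper functions and the model name are illustrative assumptions, not part of Mediacorp’s setup.

```python
# Minimal retrieval-augmented generation (RAG) sketch using the OpenAI Python SDK.
# The in-memory "knowledge base" and helper functions below are illustrative only;
# a real setup would retrieve from an approved database, connector or vector store.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

KNOWLEDGE_BASE = {
    "leave policy": "Staff are entitled to 18 days of annual leave per year.",  # example data
    "claims": "Expense claims must be submitted within 30 days of purchase.",   # example data
}

def retrieve(query: str) -> str:
    """Very naive retrieval: return entries whose key appears in the query."""
    hits = [text for key, text in KNOWLEDGE_BASE.items() if key in query.lower()]
    return "\n".join(hits) or "No matching documents found."

def answer_with_context(query: str) -> str:
    """Retrieve relevant context first, then generate an answer grounded in it."""
    context = retrieve(query)
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption; use whichever model your workspace provides
        messages=[
            {"role": "system", "content": "Answer using only the provided context. Say so if the context is insufficient."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_context("What is the leave policy?"))
```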
In general, a system prompt refers to the instruction that sets ChatGPT’s overall behaviour and guides its responses. Markdown is a lightweight formatting syntax that gives prompts a clearer structure (headings, bullet points and so on) for ChatGPT to follow. For more information on best practices for prompt writing and engineering, you may refer to module 12 of the ChatGPT 101 Recording on LMS (Learning Management System) or the article linked here.
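For a concrete picture of the two terms, here is a small, purely illustrative Python sketch of a system prompt that sets overall behaviour and a user prompt structured with Markdown; the prompt text itself is made up and not an official template.

```python
# Illustrative only: a system prompt that sets overall behaviour, and a user
# prompt structured with Markdown headings and bullet points for clarity.
system_prompt = (
    "You are a helpful assistant for Mediacorp staff. "
    "Answer concisely, in British English, and cite sources where possible."
)

user_prompt = """\
## Task
Summarise the attached meeting notes.

## Requirements
- Maximum of 5 bullet points
- Highlight decisions and action items
- Flag anything that needs follow-up
"""

# In the ChatGPT interface, the system prompt role is played by custom instructions
# or a Custom GPT's instructions; via the API it is passed as the "system" message.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
```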
Answered by OpenAI trainer: It is still important to have the fundamental skill of tweaking a prompt to meet your goals, and the judgement to tell whether a prompt is actually helping you meet them. We might not need to write prompts from scratch, but we need to know how to modify a prompt when the output we are getting is not satisfactory. In other words, you still need to be able to prompt well, even when prompting ChatGPT to write a prompt for you.
ChatGPT supports many non-English languages and generally performs well for both input and output. However, its strongest performance is still in English, since most of its training data is in English. For widely used languages such as Chinese, Spanish, or French, the quality is usually very good, but for less common languages or highly specialised contexts, responses may be less fluent or accurate.
Yes, ChatGPT can be used to translate copy and scripts. It often goes beyond direct word-for-word translation by taking into account tone, style, and context, which can make it more suitable for marketing or creative work. In comparison, tools like Google Translate are optimised for literal, fast translations and may be better for short, straightforward text. While ChatGPT’s translations are generally accurate for major languages, it’s still best practice to have a human reviewer check important content before sharing or publishing.
Yes, if the ‘Web Search’ tool is enabled, ChatGPT can retrieve information from the internet and use it to provide more up-to-date and contextual answers. By default, browsing is not always turned on. Without browsing, ChatGPT answers based only on its trained knowledge and any files or context you provide.
Projects are designed to give you a dedicated workspace where you can group together chats, files, instructions, and collaborators around a specific initiative. Unlike a single chat, which can quickly get cluttered or lose context, a Project keeps all related material organised and consistent — making it easier to revisit, share, and build on over time. You can use one long chat for multiple projects, but using Projects ensures clearer separation, better knowledge management, and avoids mixing up contexts across different workflows.
Answered by OpenAI trainer: As of now, ChatGPT cannot transcribe audio. If this functionality is required, users can potentially build on top of OpenAI’s API offering, which is a little more advanced, but audio transcription APIs are available there. Otherwise, once a transcript is available for the audio file of interest, users can use that in ChatGPT. For example, if a meeting transcript is available from whatever meeting tool you use, you can put that into ChatGPT and have it summarised. But ChatGPT itself does not currently have the capability to consume and transcribe an audio file.
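For teams comfortable with the API route mentioned above, the hedged sketch below shows one way to transcribe an audio file with OpenAI’s transcription endpoint and then summarise the transcript. The file name and model names are assumptions, and this depends on having API access set up for your organisation.

```python
# Hedged sketch: transcribe an audio file via the OpenAI API, then summarise it.
# "meeting.mp3" is a placeholder; API access and model availability depend on
# your organisation's OpenAI setup.
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the audio file with the audio transcription endpoint.
with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # a commonly available transcription model (assumption)
        file=audio_file,
    )

# Step 2: pass the transcript text to a chat model for summarisation.
summary = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption
    messages=[
        {"role": "system", "content": "Summarise meeting transcripts into key points and action items."},
        {"role": "user", "content": transcript.text},
    ],
)

print(summary.choices[0].message.content)
```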
Answered by OpenAI trainer: ChatGPT itself doesn’t replace design tools like Figma, but with Canvas it can generate working code for interactive web pages, dashboards, or simple UI prototypes. The code (often in frameworks like React) can be run, previewed, and edited directly in Canvas, or exported into your own editor for further development. In short, you can use ChatGPT to prototype interactive interfaces in code, but not full drag-and-drop design mock-ups like in Figma.
If there is limited online information about a specific job role, the Custom GPT may generate more general suggestions instead of highly role-specific examples. It will still try to draw from related tasks, skills, and common workplace scenarios, but the results may be broader or less detailed. You can always refine it by adding your own context or uploading documents, which helps ChatGPT tailor answers more closely to your team’s needs.
An Action lets a Custom GPT connect to an external tool or API to do something beyond text generation (a sketch of what such an Action might call is shown after the list below). For example, you could create a Custom GPT that:
1) Calls a calendar API to check your availability and schedule a meeting.
2) Connects to a knowledge base or database to pull the latest sales figures.
3) Triggers a task in Jira, Asana, or Trello when you ask it to “create a new ticket.”
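To make this more concrete, here is a hedged sketch of the kind of small web service a Custom GPT Action could call to “create a new ticket”. The endpoint, fields and in-memory store are entirely hypothetical; a real Action would also need an OpenAPI schema and authentication configured in the GPT builder, and would talk to the actual ticketing system.

```python
# Hypothetical backend a Custom GPT Action could call to "create a new ticket".
# Endpoint name, request fields and the in-memory store are illustrative only;
# a real deployment would sit behind authentication and call the real ticketing API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
TICKETS: list[dict] = []  # stand-in for Jira/Asana/Trello

class TicketRequest(BaseModel):
    title: str
    description: str
    priority: str = "medium"

@app.post("/tickets")
def create_ticket(req: TicketRequest) -> dict:
    """Create a ticket and return its ID so the GPT can report back to the user."""
    ticket = {"id": len(TICKETS) + 1, **req.model_dump()}
    TICKETS.append(ticket)
    return ticket
```

In the GPT builder, this endpoint would be described in an OpenAPI schema so the Custom GPT knows when and how to call it.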
No. It is possible to share the individual ingredients of a ChatGPT agent, such as the prompt, but the actual agent itself is private to the user.
Answered by OpenAI trainer: There is currently no first-party connector for Jira, but there are MCP connectors for Jira available for users to fetch ticket information. Additionally, most of the connectors planned for the October launch are for ticketing systems, so users can look forward to that. Regarding the triggering of actions such as sending emails, this type of activity is under development as part of OpenAI’s next paradigm, so users can stay tuned for further updates in the coming months. Right now, OpenAI’s capabilities are limited to the drafting-of-email stage.
Not directly. ChatGPT Enterprise cannot automatically access S/4HANA or any other internal system out of the box. To enable this, it must be integrated with S/4HANA via APIs or connectors, so that ChatGPT can query approved datasets and return results. This setup is an example of Retrieval-Augmented Generation (RAG), where ChatGPT retrieves data from a connected knowledge base before generating answers. Any such integration would need to be developed and approved with Mediacorp’s IT and security teams to ensure compliance and data governance.
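As a rough illustration of the retrieve-then-generate pattern described above, the hedged sketch below fetches figures from a hypothetical internal endpoint and passes them to the model as context. The URL, response shape and model name are placeholders; any real S/4HANA integration would use approved APIs or connectors and go through Mediacorp’s IT and security review.

```python
# Hedged sketch of the integration pattern described above: retrieve data from an
# approved internal API, then ask the model to answer using only that data.
# The endpoint URL and response shape are hypothetical placeholders.
import requests
from openai import OpenAI

client = OpenAI()

def fetch_sales_figures(quarter: str) -> str:
    # Placeholder internal endpoint; a real S/4HANA integration would use
    # approved APIs/connectors and authentication agreed with IT and security.
    resp = requests.get(
        "https://internal-gateway.example.com/sales",
        params={"quarter": quarter},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

def ask_about_sales(question: str, quarter: str) -> str:
    context = fetch_sales_figures(quarter)
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption
        messages=[
            {"role": "system", "content": "Answer strictly from the supplied data; do not guess."},
            {"role": "user", "content": f"Data for {quarter}:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```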
Future Trainings / Workshops
The AI Team, in collaboration with trainers from OpenAI, is in the midst of planning and rolling out BU (Business Unit) Enablement Sessions to help users leverage ChatGPT for specific tasks related to their work domains. We will provide updates and send out invitations progressively to users in the weeks and months ahead.
Data & Governance
ChatGPT Enterprise has been assessed and approved for use with confidential data and information (refer to this link). Nonetheless, BUs may exercise their respective discretion on whether to use the AI tool to process their data, and users should observe and exercise necessary caution when using the tool to uphold their data’s confidentiality.
While ChatGPT Enterprise has been approved for content drafting, idea generation and brainstorming uses (see Annex B of Mediacorp AI Policy), we would not recommend using it to generate images intended for direct broadcast as-is, but rather as a visual guide for the final end-product to be produced through conventional production methods. Moreover, Singapore law currently does not clearly recognise copyright ownership for works generated by AI, which may pose issues for our copyright claim on content that is AI-generated.
Each staff member’s chats, including the documents uploaded as part of those chats, are viewable by that staff member only, unless he/she shares them with other colleagues in the Enterprise workspace.
Yes, you can technically use real names in prompts on ChatGPT Enterprise when it’s needed for your work, because there is enterprise-level security in place for our ChatGPT Enterprise subscription. That said, always avoid unnecessary sensitive details and follow Mediacorp’s data-handling policies, such as the PDPA. If unsure, treat names like you would in any official company system.
Files uploaded as part of ChatGPT conversations are stored in OpenAI’s cloud infrastructure within Mediacorp’s subscribed workspace, tied to the user’s account and associated with the conversation. These files will be deleted when the associated conversation is deleted by the user. Files may also be uploaded as part of custom GPTs’ configuration during creation. The same deletion principle applies.
The short answer is that they are not compromised, but users have options for how they would like to manage stored data. When you log onto a website through Agent, the login credentials themselves are not stored, but the authentication for the session is stored for seven days. Users can go into Settings and then Data Controls, where they can manage browser data. To stop ChatGPT from storing the authentication between sessions, users can turn off site data between sessions. It is also possible to completely delete all previous browsing data by clicking ‘Delete all’.
General / Administrative
Unfortunately, it is not possible to port history over from a personal account to your Mediacorp ChatGPT Enterprise account. This history transfer can only be done if your personal subscription was also using your Mediacorp email address.
ChatGPT Enterprise is a general-purpose AI assistant that works across many workflows, from writing and analysis to brainstorming and coding. Unlike Gemini and Copilot, which are strongest within their own ecosystems (Google Workspace and Microsoft 365), ChatGPT is not tied to a single suite of apps and offers flexibility and customisation through features like Custom GPTs and Agent Mode (feature coming soon!). If you are interested in exploring other AI assistants, please feel free to write in to the AI team and we can see how we can support your needs.
To validate whether your ChatGPT account is under Mediacorp’s Enterprise License subscription, you will need to
1) Sign in using your Mediacorp account via single sign-on (SSO)
2) Once in the account, you should see the Workspace name ‘Mediacorp’ at the bottom left corner
The recordings of both 101 and 102 ChatGPT Onboarding Sessions are available for viewing on the LMS platform. Users can refer to this link or log in via SSO into LMS to view the recordings in the ‘ChatGPT’ Section under ‘Content’.

