I interviewed for the Applied AI position, essentially the ChatGPT backend. The Virtual Onsite (VO) was akin to a general hire, followed by team matching. The VO consisted of 5 rounds: coding + system design + cross-functional (XFN) + hiring manager + project deep dive.
To secure an offer, one likely needs to perform strongly in every round. Unfortunately, I was eliminated during the coding round for requiring too many hints (which was unexpected, as I had solved and executed all problems 😒).
Coding: The recruiter said there would be four questions on coroutines/generators/iterators, which turned out to resemble LeetCode problems tagged "iterator". The tasks involved implementing next(), get_state(), and set_state(), with the state being resumable. The trickiest part came in the second question, where the interviewer asked me to write tests. I casually wrote a few unit tests, but this turned out to be a critical section: the interviewer expected comprehensive tests exercising every scenario of get_state and set_state, including the StopIteration condition. The third question was to implement a class that iterates over a list, with the requirement that the tests I had written must pass. The fourth question didn't require execution: given a class that iterates over a JSON file, the task was to implement a class with the same interface that iterates over a list of such iterators. The difficulty is hard to gauge, but it felt subtly challenging.
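The exact prompts aren't reproduced in the post, but a minimal sketch of the kind of interface described (a resumable list iterator with get_state/set_state, a chained iterator over a list of such iterators, and tests covering the StopIteration boundary) might look like this. All names here are assumptions for illustration, not the actual interview code:

```python
class ListIterator:
    """Iterate over a list with a checkpointable position."""

    def __init__(self, items):
        self.items = items
        self.pos = 0

    def next(self):
        if self.pos >= len(self.items):
            raise StopIteration
        value = self.items[self.pos]
        self.pos += 1
        return value

    def get_state(self):
        # The state is just the current index, so it can be saved and restored.
        return self.pos

    def set_state(self, state):
        self.pos = state


class ChainedIterator:
    """Same interface, but iterates over a list of ListIterators in order."""

    def __init__(self, iterators):
        self.iterators = iterators
        self.idx = 0  # which inner iterator we are currently draining

    def next(self):
        while self.idx < len(self.iterators):
            try:
                return self.iterators[self.idx].next()
            except StopIteration:
                self.idx += 1
        raise StopIteration

    def get_state(self):
        # Capture the outer position plus every inner iterator's state.
        return (self.idx, [it.get_state() for it in self.iterators])

    def set_state(self, state):
        self.idx, inner_states = state
        for it, s in zip(self.iterators, inner_states):
            it.set_state(s)


# Tests in the spirit the interviewer reportedly expected: normal advancement,
# save/restore mid-stream, and the StopIteration boundary.
it = ListIterator([1, 2, 3])
assert it.next() == 1
saved = it.get_state()
assert it.next() == 2
assert it.next() == 3
try:
    it.next()
    assert False, "expected StopIteration at the end"
except StopIteration:
    pass
it.set_state(saved)  # rewind to just after the first element
assert it.next() == 2

chained = ChainedIterator([ListIterator([1, 2]), ListIterator([3])])
assert [chained.next() for _ in range(3)] == [1, 2, 3]
```

The key design point is that get_state returns everything needed to resume, which is why the chained version must also snapshot each inner iterator's state, not just its own index.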
System Design: The task was to design Foursquare; the core requirement was a get_poi feature. I proposed two approaches, geohash and quadtree, but the feedback was that this wasn't strong enough, which was somewhat frustrating. It seemed like a simple question, yet they were nitpicking.
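For context, both geohash and quadtree approaches reduce to spatially bucketing points so a nearby-POI query only inspects a few buckets. A crude sketch of that idea, using a fixed-size grid as a stand-in (the cell size, class, and method names are assumptions for illustration, not what the interviewer asked for):

```python
import math
from collections import defaultdict

CELL = 0.01  # cell size in degrees; roughly 1 km, an arbitrary choice


def cell_of(lat, lng):
    """Map a coordinate to its fixed grid cell (a crude geohash analogue)."""
    return (math.floor(lat / CELL), math.floor(lng / CELL))


class PoiIndex:
    def __init__(self):
        self.grid = defaultdict(list)  # cell -> [(name, lat, lng)]

    def add(self, name, lat, lng):
        self.grid[cell_of(lat, lng)].append((name, lat, lng))

    def get_poi(self, lat, lng, radius_deg):
        # Scan the 3x3 block of cells around the query point, then filter by
        # true distance. A real system would use geohash prefixes or a
        # quadtree so cell size adapts to POI density.
        cx, cy = cell_of(lat, lng)
        hits = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for name, plat, plng in self.grid[(cx + dx, cy + dy)]:
                    if math.hypot(plat - lat, plng - lng) <= radius_deg:
                        hits.append(name)
        return hits


idx = PoiIndex()
idx.add("cafe", 37.7750, -122.4194)
idx.add("museum", 37.8000, -122.4000)
assert idx.get_poi(37.7751, -122.4195, 0.005) == ["cafe"]
```

A stronger interview answer would likely go beyond the indexing choice: sharding hot cells, caching popular queries, and using haversine distance rather than this flat-plane approximation.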
Hiring Manager: This round was conducted by a woman and the feedback was relatively positive. She asked about a dozen tough questions, such as reasons for changing jobs, why OpenAI, any challenging situations, any constructive feedback given or received, any failures, and potential reasons for leaving OpenAI. I recommend practicing by recording oneself to keep answers concise, or mock interviewing with experienced managers to pinpoint areas for improvement.
XFN: This round consisted of just two questions: walking through an example of working with a PM, and sharing experiences of competing priorities with XFN teams. This round and the hiring manager round were similar, both focusing on behavioral questions. Improvements can be made through mock interviews with actual managers or PMs to receive valuable feedback. Practicing extensively helped me receive positive feedback in this round.
Project Deep Dive: This round is about telling a complete technical story covering the why, what, how, and the learnings and takeaways. You can apply the STAR method from behavioral interviews, layering in technical detail. Negative feedback in this round would be hard to recover from.
Overall, it seems OpenAI is looking for candidates who are not only experienced but also possess strong leadership qualities, can drive the interview without exposing weaknesses, and leave a lasting impression on the interviewers. Securing an offer appears to be quite difficult.