When I’m interviewing I never ask a question about something I know super well. I circle around whatever the candidate signals genuine passion for and understanding of, and start deep-diving into that.
If I know the topic as well, then we can have a really deep discussion; if I don’t, then I can learn something new.
The aim when interviewing is to check how well / deeply the interviewee can think through a problem.
If we pick a topic they don’t have deep knowledge of, they can either stumble and freeze up, or hallucinate something just to appear smart. At that point it is an examination, not an interview. Sure, some people are capable enough to get to an answer anyway, but that’s more of a lottery than real analysis.
It usually boils down to how often they have interviewed before and been in a similar situation. And “people who have interviewed a lot” is hardly a metric I want to optimise for.
Picking something they know, or have expressed interest or passion in, means we are sure to get more signal than noise. If the interviewee’s description sounds like a canned response, then I delve deeper or broader.
“I’ve managed to solve this issue by introducing caching” - “Great, are there other solutions? How do you handle cache invalidation, and what are the limits? What will you do if the load increases 10-fold - would you need to rethink the solution?”
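To make those follow-ups concrete: even a toy cache forces the invalidation and sizing questions. A minimal sketch (illustrative only, not anyone's production code, and all names here are made up), where a TTL answers "how do you invalidate" and a max size answers "what are the limits":

```typescript
// Toy TTL + bounded-size cache: the two knobs the follow-up questions poke at.
class TtlCache<K, V> {
  private entries = new Map<K, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number, private maxEntries: number) {}

  get(key: K, now = Date.now()): V | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (e.expiresAt <= now) {
      // time-based invalidation: stale entries die on read
      this.entries.delete(key);
      return undefined;
    }
    return e.value;
  }

  set(key: K, value: V, now = Date.now()): void {
    if (!this.entries.has(key) && this.entries.size >= this.maxEntries) {
      // capacity limit: evict the oldest insertion (FIFO; LRU is the usual upgrade)
      const oldest = this.entries.keys().next().value as K;
      this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }

  // explicit invalidation, e.g. on writes to the underlying store
  invalidate(key: K): void {
    this.entries.delete(key);
  }
}
```

Each knob invites the next question - FIFO vs LRU eviction, read-through vs write-through invalidation, what happens when the working set outgrows `maxEntries` - which is exactly what makes it fertile interview ground.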
Live coding during an interview is one of the most oppressive things I’ve witnessed in the industry in general.
There is usually a huge disconnect between someone who knows that “this task should take 20mins” and doing it cold in a super high-pressure environment.
People sweat, panic, brain-freeze, and are just plain stressed out.
I’ll only OK something like this if we hand out a similar (but not identical) task before the interview, so the person can practise a bit beforehand.
I’ve heard it justified as “we want to see how you perform under pressure”, but to me that has always sounded super flimsy - if this is representative of how work is done at this organisation, do I want to work there in the first place? And if it isn’t, why the hell are you putting people through the wringer? It just sounds inhumane.
Yea, there's really no way to do an "interview assignment" well.
If you give an unlimited amount of time, you're giving an advantage to people with no life who can just focus on your assignment and polish it as if it were a full-time job.
If you give a limited amount of time, then you're making the interview a pressure cooker with a countdown clock, giving a disadvantage to people who are just not great at working under minute-to-minute time pressure.
Depends on the purpose. If you treat it as a minimum bar to pass, are up front about that, and actually adhere to it, then anyone spending more than the limit on it is presumably just wasting their own time (and to an extent the company's, because the application process continues). It only becomes a problem if, instead of an objective pass/fail metric, you start gauging other details that would benefit from additional time spent.
Thanks for clarifying - I kinda get the idea but would love to see an example for this.
I’ve mostly given up on all of the standard interviewing techniques, sadly, just because “using AI” makes a lot of them trivial, and have resorted to the good old-fashioned interview, where I screen for drive, values and root-cause seeking, and let people learn tech/frameworks/etc. themselves.
But I was wondering, isn’t a take home question still good, if you give a more open ended and ambitious task, and let people vibe code the solution, review the result but ask for the prompt/session as well?
People will be doing that during normal work anyway, so why not test that directly?
One such question (obviously tailored to the role I'm hiring for) is asking whether SoA or AoS inputs will yield a faster dot-product implementation and whether the answer changes for small vs large inputs, also asking why that would be the case.
I typically offer a test with a small number of such questions since each one individually is noisy, but overall the take-home has good signal.
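For readers unfamiliar with the jargon: a minimal sketch of the two layouts, assuming the question is about batches of small (3-component) vectors - the type and function names are mine, not the parent's actual test. SoA keeps each component in its own contiguous array, so the hot loop is unit-stride and vectorizes well on large inputs; AoS keeps each vector's fields together, which can be fine (or even better) on small inputs that fit in cache, where layout barely matters:

```typescript
// AoS: one object per 3-vector; a vector's components sit together.
type Vec3 = { x: number; y: number; z: number };

function dotAoS(a: Vec3[], b: Vec3[]): number[] {
  const out = new Array<number>(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = a[i].x * b[i].x + a[i].y * b[i].y + a[i].z * b[i].z;
  }
  return out;
}

// SoA: one contiguous array per component; each load in the loop is
// unit-stride, which SIMD hardware (and JIT auto-vectorizers) like.
function dotSoA(
  ax: Float64Array, ay: Float64Array, az: Float64Array,
  bx: Float64Array, by: Float64Array, bz: Float64Array,
): Float64Array {
  const out = new Float64Array(ax.length);
  for (let i = 0; i < ax.length; i++) {
    out[i] = ax[i] * bx[i] + ay[i] * by[i] + az[i] * bz[i];
  }
  return out;
}
```

Both compute identical results; the interesting part of the interview answer is explaining why only the memory-access pattern differs, and when that starts to dominate.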
> why not test that directly?
The big thing is that you don't have enough time to probe everything about a candidate, especially if you're being respectful of their time and not burning too much of yours. Your goal is to maximize information gain with respect to the things you care about while minimizing any negative feelings the candidate has about your company.
I could be wrong, but vibe coding feels like another skill which is more efficient to probe indirectly. In your example, I would care about the prompt/session, mostly wouldn't care about the resulting code, and still don't think I would have enough information to judge whether they were any good. There are things I would want to test beyond the vibe coding itself.
In particular, one thing I think is important is being able to reason about code and deeply understand the tradeoffs being made. Even if vibe coding is your job and you're usually able to go straight from Claude to prod, it's detrimental (for the roles I'm looking at) to not be able to easily spot memory leaks, counter-productive OO abstractions, a lack of productive OO abstractions, a host of concurrency issues LLMs are kind of just bad at right now, and so on. My opinion is that the understanding needed to use LLMs effectively (for the code I work on) is much more expensive to develop than any prompt engineering, so I'd rather test those other things directly.
This seems quite amazing really, thanks for sharing
What is the scope of projects / features you’ve seen this be successful at?
Do you have a step before where an agent verifies that your new feature spec is not contradictory, ambiguous etc. Maybe as reviewed with regards to all the current feature sets?
Do you make this a cycle per step - by breaking down the feature to small implementable and verifiable sub-features and coding them in sequence, or do you tell it to write all the tests first and then have at it with implementation and refactoring?
Why not refactor-red-green-refactor cycle? E.g. a lot of the time it is worth refactoring the existing code first, to make a new implementation easier, is it worth encoding this into the harness?
I do it per feature, not per step. Write the AC for the whole feature upfront, then the agent builds against it. I haven't added a spec-validation step before coding, but that's a good idea - catching ambiguity in the spec before the agent runs with it would save a lot of rework.
Do you need react at this point? Isn’t it just html/css/components?
I remember React was born because Facebook had a problem - you would add a comment and your notification bar would sometimes not get updated.
They had so many bugs with hand-rolled HTML/CSS/DOM updates that they wanted to solve this at the application layer - to make inconsistent UI elements unrepresentable.
So they came up with React with global state - because in their use case changing one thing does affect a bunch of other, seemingly unrelated things, and they all need to stay in sync.
I mean honestly that’s what I use React _for_ - especially with contexts it’s very easy to express all of this complex interconnected state of a webapp in a consistent way.
And of course there are other ways to solve it - for example with elixir/phoenix you just push all that complexity to the backend and trust in websockets and the BEAM.
I just feel that if you really don’t need global state, then react kinda isn’t needed as well…
> I just feel that if you really don’t need global state, then react kinda isn’t needed as well…
I don't know, in my mind "re-render (efficiently) when state changes" is the core point of react and similar frameworks. That requirement still stands even if I have a smaller, local state.
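That core loop - hold state, notify on change, skip work when nothing changed - fits in a few lines. A back-of-the-napkin sketch (emphatically not React's actual implementation; all names are made up):

```typescript
type Listener<S> = (state: S) => void;

// A tiny observable store: subscribers "re-render" when state changes,
// and setting an identical value triggers nothing at all.
function createStore<S>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener<S>>();
  return {
    get: () => state,
    subscribe(fn: Listener<S>) {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
    set(next: S) {
      if (Object.is(next, state)) return; // bail out: no change, no re-render
      state = next;
      listeners.forEach((fn) => fn(state)); // "re-render" every subscriber
    },
  };
}
```

A component is then just a subscriber that redraws its own patch of DOM from the new state - and whether that state is app-global or local to one widget, the re-render-on-change machinery is the same.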
Honestly, this seems very much like the jump from being an individual contributor to being an engineering manager.
The time it happened for me was rather abrupt, with no training in between, and the feeling was eerily similar.
You know _exactly_ what the best solution is, you talk to your reports, but they have minds of their own, as well as egos, and they do things … their own way.
At some point I stopped obsessing over details and just gave guidance and direction in the cases where it really mattered, or when asked, and let people make their own mistakes.
Now LLMs don’t really learn on their own or anything, but the feeling of “letting go of small trivial things” is sorta similar. You concentrate on the bigger picture, and if it chose to do an iterative for loop instead of using a functional approach the way you like it … well the tests still pass, don’t they.
The only issue is that as an engineering manager you reasonably expect the team to learn new things, improve their skills, and in general grow as engineers. With AI and its context handling, you're working with a team where each member has severe brain damage that affects their ability to form long-term memories.
You can rewire their brain to a degree teaching them new "skills" or giving them new tools, but they still don't actually learn from their mistakes or their experiences.
As a manager I would encourage them to use the LLM tools. I would also encourage unit tests, e2e testing, testing coverages, CI pipelines automating the testing, automatic pr reviewing etc...
It's also peeking at the big/impactful changes and ignoring the small ones.
Your job isn't to make sure they don't have "brain damage"; it's to keep them productive and not shipping mistakes.
Being optimistic (or pessimistic heh), if things keep the trend then the models will evolve as well and will probably be quite better in one year than they are now.
I highly advise traveling - I went through a similar experience - had some savings that could last for a few years (much longer if I stretched them).
So just decided to get a motorbike license and go check out Asia.
Ended up finding a partner (totally unexpected), selling everything, moving abroad, marrying them, and now expecting a child (planned), all in the span of 3 years.
Has been quite the joyful and interesting experience, all after I had the deeply depressing feeling of having “solved life” at my nice position in the EU.
There are so many places in the world where you can feel you are actually doing great service to the community, on a shoestring budget and feel happy and fulfilled.
I like what Singapore is doing - having a government built “base level” of housing that is both abundant and readily available - it can anchor the price where deep excesses are harder to end up with.
It’s like a market where a very significant player keeps the price low, for its own reasons.
In such a scenario the price will not go up as sharply, so there would be less incentive for people to buy real estate just as a financial vehicle.
And the government can also prioritise who it sells the units it builds to - e.g. not investors.
I'm honestly surprised that Western governments are not trying this.
Yes - in the UK we had a strong social housing sector, with award winning architecture (and some terrible mistakes too).
Then along came 'right-to-buy', allowing tenants to buy their social housing for knock down prices (and so become a natural Tory [right of centre party] voter).
If councils had been allowed to use the money to build more social housing, then maybe this was fine. But they were not. So now we have affordability issues in the UK too.
I still remember the joy of using the flagship rails application - basecamp.
Minimal JS, at least compared to now, mostly backend rendering, everything felt really fast and magical to use.
Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.
Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.
React apps, _especially_ ones hosted on Next.js, rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few orders of magnitude of perf improvements across most pieces of the stack.
It’s just wild to me that we had faster web apps, with better organization, better dev ex, faster to build and easier to maintain.
The only “wins” I can see for a nextjs project is flexibility, animation (though this is also debatable), and maybe deployment cost, but again I’m comparing to deploying rails 15 years ago, things have improved there as well I’m sure.
I know react can accomplish _a ton_ more on the front end but few projects actually need that power.
Isn’t that what Phoenix (Elixir) is? All server-side, a small JS lib for partial loads, each individual website user gets their own lightweight BEAM process on the backend with its own state, and everything is tied together with websockets.
Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the front-end code.
Websockets+thin JS are best for real time stuff more than standard CRUD forms. It will fill in for a ton of high-interactivity usecases where people often reach for React/Vue (then end up pushing absolutely everything needlessly into JS). While keeping most important logic on the server with far less duplication.
For simple forms I personally find the server-by-default solution of https://turbo.hotwired.dev/ far better: the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM instead of doing full page reloads (i.e., clicking edit to change a small form in place, instead of redirecting to one big form).
Idk about Phoenix, but having tried Blazor, the DX is really nice. It's just a terrible technical solution, and network latency / spotty wifi makes the page feel laggy. Not to mention it eats up server resources to do what could be done on the client instead with way fewer moving parts. Really the only advantage is you don't have to write JS.
It's basically what Phoenix LiveView specifically is. That's only one way to do it, and Phoenix is completely capable of traditional server rendering and SPA style development as well.
LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...
It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.
Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.