Hacker News | new | past | comments | ask | show | jobs | submit | IgorPartola's comments

I used to use rebase much more than merge but have grown to be more nuanced over the years:

Merge commits from main into a feature branch are totally fine and easier to do than rebasing. After your feature branch is complete you can do one final main-to-feature-branch merge and then merge the feature branch into main with a squash commit.
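A minimal sketch of that flow in a throwaway repo (branch and file names are purely illustrative):

```shell
set -e
cd "$(mktemp -d)"                 # throwaway repo to demonstrate the flow
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"

git checkout -q -b feature
echo work > feature.txt
git add feature.txt
git commit -q -m "WIP: feature work"

# Periodically pick up what landed on main via a merge, not a rebase
git merge -q main

# Feature complete: squash-merge it into main as a single commit
git checkout -q main
git merge --squash feature >/dev/null
git commit -q -m "Add feature"
git log --oneline                 # two commits, no merge commits
```

The WIP merge commits stay on the feature branch; main only ever sees the one squashed commit.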

When updating any branch from remote, I always do a pull rebase to avoid merge commits from a simple pull. This works well 99.99% of the time since what I have changed vs what the remote has changed is obvious to me.
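That pull behavior can be made the default with one config line, so a plain `git pull` rebases instead of merging (a sketch; `rebase.autoStash` is an optional extra):

```shell
set -e
cd "$(mktemp -d)" && git init -q
# Make a plain `git pull` rebase local commits onto the fetched branch
git config pull.rebase true
# Optional: stash/unstash a dirty worktree around the pull automatically
git config rebase.autoStash true
git config pull.rebase            # prints: true
```

Set with `--global` instead if you want it for every repository.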

When I work on a project with a dev branch I treat feature branches as coming off dev instead of main. In this case I merge dev into feature branches, then merge feature branches into dev via a squash commit, and then merge main into dev and dev into main as the final step. This way I have a few merge commits on dev and main but only when there is something like an emergency fix that happens on main.
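Roughly, that dev-branch flow looks like this (a sketch in a throwaway repo; all names are illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"

git checkout -q -b dev            # long-lived integration branch
git checkout -q -b feature dev    # features come off dev, not main
echo x > f.txt
git add f.txt
git commit -q -m "feature work"

git checkout -q dev
git merge --squash feature >/dev/null   # feature -> dev as one commit
git commit -q -m "Add feature"

git merge -q main                 # final step: sync main into dev...
git checkout -q main
git merge -q dev                  # ...then bring dev into main
git log --oneline main
```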

The problem with always using a rebase is that you have to reconcile conflicts at every commit along the way instead of just the final result. That can be a lot more work for commits that will never actually be used to run the code and can in fact mess up your history. Think of it like this:

1. You create branch foo off main.

2. You make an emergency commit to main called X.

3. You create commits A, B, and C on foo to do your feature work. The feature is now complete.

4. You rebase foo off main and have to resolve the conflict introduced by X happening before A. Let’s say it conflicts with all three of your commits (A, B, and C).

5. You can now merge foo into main with it being a fast forward commit.

Notice that at no point will you want to run the codebase such that it has commits XA or XAB. You only want to run it as XABC. In fact you won’t even test whether your code works in the state XA or XAB, so there is little point in having those checkpoints. You care about three states: main before any of this happened since it was deployed like that, main + X since it was deployed like that, and main with XABC since you added a feature. git blame is really the only time you will ever possibly look at commits A and B individually, and even then the utility of it is so limited it isn’t worth it.
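The five numbered steps read like this as commands (a sketch with the same hypothetical names; here X touches different files than A, B, and C, so this particular run goes through without conflicts):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > base.txt
git add base.txt
git commit -q -m "base"

git checkout -q -b foo            # 1. branch foo off main

git checkout -q main              # 2. emergency commit X on main
echo hotfix > hotfix.txt
git add hotfix.txt
git commit -q -m "X"

git checkout -q foo               # 3. feature commits A, B, C
for c in A B C; do
  echo "$c" > "$c.txt"
  git add "$c.txt"
  git commit -q -m "$c"
done

git rebase -q main                # 4. rebase foo onto main
git checkout -q main
git merge -q --ff-only foo        # 5. fast-forward merge into main
git log --oneline                 # base, X, A, B, C: linear history
```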

The reality is that if you only want fast forward commits, chances are you are doing very little to go back and extract code out of old versions of the codebase. You can tell this by asking yourself: “if I deleted all my git history from main and had just the current state + feature branches off it, would anything bad happen to my production system?” If not, you are not really doing most of what git can do (which is a good thing).


I am now wholly bought into the idea that having a feature branch with (A->B->C) commits is an anti-pattern.

Instead, if the feature doesn't work without the full chain of A+B+C, either the code introduced in A and B is orphaned (exercised only by tests) until C joins it up; or, preferably for a feature of any significance, A introduces a feature flag which disables it, and a subsequent commit D removes the feature flag after it has been turned on, at a time separate from merge and deploy.


I treat each feature branch as my own personal playground. There should be zero reason for anyone to ever look at it. Sometimes they aren’t even pushed upstream. Otherwise, just work on main with linear history and feature flags and avoid all this complexity that way.

Just like you don’t expect someone else’s local codebase to always be in a fully working state since they are actively working on it, why do you expect their working branch to be in a working state?


I think you're somewhat missing the point - if the code from A and B only works if joined with C, then you should squash them all into one commit so that they can't be separated. If you do that then the problem you're describing goes away since you'll only be rebasing a single commit anyway.

Whether this is valuable is up to you, but IMO I'd say it's better practice than not. People do dumb things with the history and it's harder to do dumb things if the commits are self-contained. Additionally if a feature branch includes multiple commits + merges I'd much rather they squash that into a single commit (or a couple logical commits) instead of keeping what's likely a mess of a history anyway.


That is literally what I advocate you do for the main branch. A feature branch is allowed to have WIP commits that make sense for the developer working on the branch just like uncommitted code might not be self contained because it is WIP. Once the feature is complete, squash it into one commit and merge it into main. There is very little value to those WIP commits (rare case being when you implement algorithm X but then change to Y and later want to experiment with X again).

One downside of squash merging is that when you split your work across branches so that they can be separate PRs, but one depends on another, you have to rebase the dependent branch every time one of its dependencies is merged.

When that happens I essentially pick one of the branches as the trunk for that feature and squash merge into that, test it, then merge a clean history into main.
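For that stacked-branch case, `git rebase --onto` replays only the dependent branch's own commits once the dependency has been squash-merged (a sketch with hypothetical branch names):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"

git checkout -q -b branch-1
echo one > one.txt
git add one.txt
git commit -q -m "part 1"

git checkout -q -b branch-2       # stacked on top of branch-1
echo two > two.txt
git add two.txt
git commit -q -m "part 2"

# branch-1 lands on main as a squash commit
git checkout -q main
git merge --squash branch-1 >/dev/null
git commit -q -m "Add part 1"

# Replay only branch-2's own commits onto the new main
git rebase -q --onto main branch-1 branch-2
git log --oneline
```

Without `--onto`, a plain `git rebase main branch-2` would try to replay branch-1's old (now squashed) commits as well.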

Let’s see if I get this wrong after 25 years of git:

ours means what is in my local codebase.

theirs means what is being merged into my local codebase.

I find it best to avoid merge conflicts rather than to try to resolve them. Strategies that keep branches short-lived, plus frequently merging main into them, help a lot.
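For reference, in a plain merge conflict those two sides can be picked wholesale per file (a sketch in a throwaway repo; names are illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > f
git add f
git commit -q -m "base"

git checkout -q -b other
echo theirs-version > f
git commit -q -am "their change"

git checkout -q main
echo ours-version > f
git commit -q -am "our change"

git merge other || true    # CONFLICT: both sides edited f
git checkout --theirs f    # "theirs" = the branch being merged in
git add f
git commit -q -m "merge other, keeping their version"
cat f                      # theirs-version
```

`git checkout --ours f` would have kept the current branch's version instead.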


That's kind of the simplest case, though, where "theirs" and "ours" makes obvious sense.

What if I'm rebasing a branch onto another? Is "ours" the branch being rebased, or the other one? Or if I'm applying a stash?


"Ours" and "theirs" make sense in most cases (since "ours" refers to the HEAD you're merging into).

Rebases are the sole exception (in typical use) because ours/theirs is reversed, since you're merging HEAD into the other branch. Personally, I prefer merge commits over rebases where possible; rebases make PRs harder for others to review by breaking the "see changes since last review" feature. Git generally works better without rebases and squash commits.


Wow, interesting to see such a diametrically opposed view. We’ve banned merge commits internally and our entire workflow is rebase driven. Generally, I find that rebases are far better at keeping Git history clean and clearly allowing you to see the diff between the base you’re merging into and the changes you’ve made.

"Clean" is not the same as "useful". You have to be really, really disciplined to not make a superficially looking "clean" history which may appear linear but which is actually total nonsense.

For example, if one is frequently doing "fix after rebase" commits, then they are doing it wrong and are making a history which is much less useful than a seemingly more complicated merge based history. Rebased histories are only clean if they also tell a true story after the rebase, but if you push "rebase fixes" onto the end of your history, then it means that prior rebased commits no longer make any sense because they e.g. use APIs that aren't actually there. Giving up and squashing everything to one commit is almost better in this case because it at least won't throw off someone who is trying to make sense of the history in the future.

I think that rebasing has won over merges mostly because the tools for navigating git histories suck SO HARD. I have used Perforce at a previous job, and their graphical tools for navigating a merge based history are excellent and were really useful for doing code archeology.


Generally our pattern is that every PR gets rebased into sensible commits. So in a way we are doing "squash commits" but the method is an interactive rebase. This keeps our history very pretty and clean, and simultaneously easy to grok and navigate.

My favorite git GUI is Sublime Merge.


Yes, I prefer that approach as well because it allows the person who authored the change to do all the work of deciding how to resolve conflicts up front (and allows reviewers to review that conflict resolution) instead of forcing whoever eventually does the merge to figure everything out after the fact. It also removes conflicts from the history so you never have to think about them later after the rebase/merge process is finished.

> Git generally works better without rebases and squash commits.

If squash commits make Git harder for you, that's a tell that your branches are trying to do too many things before merging back into main.


I don't know. Even when I'm working on my own private repositories across several machines, I really, really dislike regular merges. You get an ugly commit message and I can never get git log to show me the information I actually want to see.

For me, rebasing is the simplest and easiest to understand, and it allows you to squash some of your commits so that it's one commit per feature / bug-fix / logical unit of work. I'll also frequently rebase and squash commits in my work branch: where I've temporarily committed something and then fixed a bug before it's been pushed to main, I'll just reorder and squash the relevant commits into one.


I completely agree. Since switching to rebase, our history looks fantastic, and it makes finding things, cherry-picking, and generating changelogs really simple. Why not be neat? It costs us nothing, and you can easily make yourself a tutorial on Claude if you don't understand rebasing.

Don't do squash commits, just rebase -i your branch before merging so you only have one commit. It's pretty trivial to do.
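A non-interactive stand-in for that cleanup (a sketch; the sed one-liner assumes GNU sed and just plays the editor's role of turning `pick` into `fixup` for the follow-up commits):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"
for n in 1 2 3; do
  echo "$n" > "f$n"
  git add "f$n"
  git commit -q -m "WIP $n"
done

# Equivalent of running `git rebase -i HEAD~3` and marking the 2nd
# and 3rd commits as `fixup` so they fold into the 1st:
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/fixup/"' git rebase -i HEAD~3
git log --oneline                 # initial + one combined commit
```

Interactively you would simply edit the todo list by hand instead of scripting the editor.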

> What if I'm rebasing a branch onto another?

Just checkout the branch you are merging/rebasing into before doing it.

> Or if I'm applying a stash?

The stash is in that case effectively a remote branch you are merging into your local codebase. ours is your local, theirs is the stash.
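That mapping can be checked directly: when applying a stash conflicts, `--theirs` picks the stashed content (a sketch in a throwaway repo):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > f
git add f
git commit -q -m "base"

echo stashed > f
git stash -q                      # shelve the edit
echo committed > f
git commit -q -am "local change"  # diverge from the stash's base

git stash apply || true           # CONFLICT: both changed f
git checkout --theirs f           # "theirs" = the stashed version
cat f                             # stashed
```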


The thing is, you'll typically switch to master to merge your own branch. This makes your own branch 'theirs', which is where the confusion comes from.

Not me. I typically merge main onto a feature branch where all the conflicts are resolved in a sane way. Then I checkout main and merge the feature branch into it with no conflicts.

As a bonus I can then also merge the feature branch into main as a squash commit, ditching the history of a feature branch for one large commit that implements the feature. There is no point in having half implemented and/or buggy commits from the feature branch clogging up my main history. Nobody should ever need to revert main to that state and if I really really need to look at that particular code commit I can still find it in the feature branch history.


Yep. This is the only model that has worked well for me for more than a decade.

This is what I do, and I was taught by an experienced Git user over a decade ago. I've been doing it ever since. All my merges into main are fast forwards.

> ours means what is in my local codebase

Since it's always one person doing a merge, why isn't it "mine" instead of "ours"? There aren't five of us at my computer collaboratively merging in a PR. There is one person doing it.

"Ours" makes it sound like some branch everyone who's working on the repo already has access to, not the active branch on my machine.


That's between you and git.

a better (more confusing) example:

i have a branch and i want to merge that branch into main.

is ours the branch and main theirs? or is ours main, and the branch theirs?


I always checkout the branch I am merging something into. I was vaguely aware I could have main checked out but merge foo into bar but have never once done that.

  git checkout mybranch
  git rebase main
A conflict happens. Now "ours" is main and "theirs" is mybranch, even though from your perspective you're still on mybranch. Git isn't, however.

Ah that’s fair. This is why I would do a `git merge main` instead of a rebase here.

I have met more than one person who would doggedly tolerate rebase, not even using rerere, instead of doing a simple ‘git merge --no-ff’ to one-shot it, not understanding that rebase touches every commit between main and HEAD, not simply the latest change on HEAD.

Not a problem if you are a purist on linear history.
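`rerere` ("reuse recorded resolution") is a one-line config: once enabled, git records how you resolved a conflict and replays that resolution when the identical conflict reappears, which takes much of the sting out of repeated rebases (a sketch):

```shell
set -e
cd "$(mktemp -d)" && git init -q
# Record each conflict resolution and replay it automatically the
# next time the identical conflict shows up
git config rerere.enabled true
# The one-shot alternative mentioned above, forcing a merge commit
# even when a fast-forward is possible, would be:
#   git merge --no-ff some-feature-branch
git config rerere.enabled         # prints: true
```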


> not understanding that rebase touches every commit in the diff

it sounds like that's a problem for you. why would that be? i prefer rebase and fast forward, but i am fully aware that rebase rewrites all commits.


> Let’s see if I get this wrong after 25 years of git

You used it 5 years before Linus? Impressive!


Haha yes. You caught me :)

I was wondering when someone was going to point it out. I actually have only been using it since about 2009 after a brief flirtation with SVN and a horrible breakup with CVS.


Exactly this. But my question here is also: is there not a competitive advantage to a big enterprise that applies standards in a more intelligent way? You have a SaaS, I have a Fortune 500 company that could use your product but I cannot use it because my procurement process is as long and winding as the Road to Hana. In the meantime my competitor has a smarter procurement process that takes into account the impact and risk involved in renting your software. Don’t they get a competitive advantage over me by having a better process and as a result getting better vendors?

Unfortunately in most cases the buyers have way more liability/risk using a small vendor than opportunity. Often this is coming from regulators in certain industries.

In scenarios where the company REALLY REALLY wants to buy the SaaS, they often will invest in the company, one of the reasons for which being to ensure they have the resources to go through all the red tape.


Or that API prices are inflated. We don’t get to see what their internal financials look like. My guess is that your guess is more correct but it is unclear what is actually happening.

What about the other times?


In theory because the code being added is introducing a feature so compelling that it is worth it. In practice, that’s rarely the case.

My personal approach to open source is more or less that when I need a piece of software to exist that does not and there is no good reason to keep it private, it becomes open source. I don’t do it for fun, I do it because I need it and might as well share it. If someone sends me a patch that enhances my use case, I will work with them to incorporate it. If they send me a patch that only benefits them it becomes a calculus of how much effort would it take for me to review it. If the effort is high, my advice is to fork the project or make it easier for me to review. Granted I don’t maintain huge or vital projects, but that’s precisely why: I don’t need yet another programming language or runtime to exist and I wouldn’t want to work on one for fun.


Most large companies will continue quarterly reporting because institutional investors will not accept anything else. For a company with a market cap of $500m, spending $1-2m yearly on quarterly audits is non-trivial. For a company that’s $5b and up, that’s not much at all.

This is also not a done deal and large pension funds will oppose this hard during the public comment portion of this process.


There are two separate problems with IPv4 and only one applies to IPv6. Allowing incoming connections through a restrictive firewall is applicable to both. Address mangling via NAT applies only to one. Note also that in the IPv4 world you might be behind more than one layer of NAT which will make everything infinitely worse.

Honestly ISPs really missed an opportunity to essentially provide IPv6-only as a service and add an IPv4 compatibility layer to that (IPv6 already has a mechanism built in for this but grandma’s old laptop might not fully support it so you might need a local router provided by the ISP to give you native local IPv4 that allows you to access the internet) instead of CGNAT. But they chose to go with duct tape, spit, paper clips, and hope instead of investing in the correct solution. Shame on them and too bad for us.


Exactly. And look, the linked Python script only solves one problem: making both firewalls believe that the party behind them is the one who initiated the connection. Address/port mangling is not addressed at all, both public addresses need to be provided externally.

And it's simply not true that there is no NAT in the wild with IPv6: every OPNsense installation with two uplinks and the need for anything better than an "arbitrary and uncontrollable" choice of the correct uplink for each outbound connection needs network prefix translation, as the residential dual-homing story for IPv6 is vaporware otherwise. NPT is used not for address space conservation, but to defer the decision about the correct source address to the router that has the knowledge of the correct policy.

And in this sense, IPv6 is worse than IPv4: there are too many people assuming no firewall and no NAT for IPv6, and designing their applications based on these almost-working (but de-facto now broken) premises. The correct premises are the same as for IPv4.


If you have two uplinks and are running OPNsense (saying this as someone who does exactly this), you have a particular setup that you have clearly taken ownership of. If that breaks your experience with standard tech, that is part of what you traded off when you customized your setup as such.

IPv6 is strictly worse for this precisely because it is treated as a second-class citizen. If it were the default in all the tutorials and we started naming IPv4 as the legacy protocol, developers would know better.


IPv6 nat is a thing that exists and is used. IPv6 purists like to imagine it doesn't exist which is cute.


It’s extremely rare compared to v4, where it’s more common than not. I haven’t seen it with a single consumer ISP, and why should they use it?


Not up to you as a hole puncher to decide what a network uses. You just punch holes. IPv6 doesn’t allow you to skip NAT unless you are comfortable with your code not running everywhere.


> Honestly ISPs really missed an opportunity to essentially provide IPv6-only as a service

This is in fact how many 4G and 5G networks work today. I’m sending this reply via one right now.

For wired connections, many DOCSIS cable ISPs use DS-Lite, which is v6 only on the last hop as well.


There are two ways to interpret your comment:

1. Google is getting so much productivity out of their AI that they need fewer people.

2. Google is spending so much on AI they can’t afford to keep the people they need.


Or

3. Google is spending so much on AI that they can't afford to keep paying people, but they are ok with this because they are convinced the AI investment will replace the people at an eventual cost savings.


That seems to have been Dorsey's approach. The business has been stagnant, so cut the roster and bet big on some future returns from AI.


Google (and almost all other BigTech) is spending on scaling compute (data centers/securing power generation/chip contracts). My comment was not related to AI productivity and its impact on reduction of workforce. I believe a company spending nearly all its free cash flow on scaling compute (or borrowing money to do so) would have a different opinion on the economics of human capital.


I subscribe to the second point of view. Several companies fall in that bucket. Oracle comes to mind.


Imagine a society where one person produces all the value. Their job is to do highly technical maintenance on a single machine that is basically the Star Trek replicator: it produces all the food, clothing, housing, energy, etc. that is enough for every human in this society and the surplus is stored away in case the machine is down for maintenance, which happens occasionally. Maintaining the machine takes very specialized knowledge but adding more people to the process in no way makes it more productive. This person, let’s call them The Engineer, has several apprentices who can take over but again, no more than 5 because you just don’t need more.

In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?


For something to be deserved, it must be earned. What do these people do to distinguish themselves from The Engineer’s pets? If they are wholly dependent on him for their subsistence, what distinguishes him from their god?

To derive an alternate system you need alternate axioms. The axioms of our liberal society are moral equality and peaceful coexistence. Among such equals, no one person, group, or majority has the right to dictate to another. What axioms do you propose that would constrain The Engineer? How would you prevent enslaving him?


Hey, dude. How does someone earn value once automation does all the work? Earning the right to a share of the resources when resources are derived from automated labor is such a thoroughly pathological concept that I'm not sure we're communicating on the same planet.


Same way everyone has earned value from the beginning of time: negotiate with others. We are all born naked and without possessions. Everything we get, from the first day of our birth, is given to us by someone else. Our very first negotiations are simple, we are in turns endearing and annoying. As we grow older they become more complex. All I’m saying is that these interactions should be maximally voluntary and nonviolent.


> For something to be deserved, it must be earned.

Eeeeeerrrr, wrong! This is garbage hypercapitalist/libertarian ideology.

Did you earn your public school education? Did you earn your use of the sidewalk or the public parks and playgrounds? Did you earn your library card? Did you earn your citizenship or right to vote? Did you earn the state benefits you get when you are born disabled? Did you earn your mother’s love?

No, these are what we call public services, unalienable rights, and/or unconditional humanity. We don’t revolve the entire world and our entire selves solely around profit because it’s not practical and it’s empty at its core.

Arguably we still do too much profit-based society stuff in the US where things like healthcare and higher education should be guaranteed entitlements that have no need to be earned. Many other countries see these aspects of society as non-negotiable communal benefits that all should enjoy.

In this hypothetical society with The Engineer, it’s likely that The Engineer would want or need to win over the minds of their society in some way to prevent their own demise and ensure they weren’t overthrown, enslaved, or even just thought of as an evil person.

Many of my examples above like public libraries came about because gilded age titans didn’t want to die with the reputation of robber barons. Instead, they did something anti-profit and created institutions like libraries and museums to boost the reputation of their name.

It’s the same reason why your local university has family names on its buildings. The wealthiest people in society often want to leave a positive legacy where the alternative without philanthropy and, essentially, wealth redistribution, is that they are seen as horrible people or not remembered at all.


> This is garbage hypercapitalist/libertarian ideology.

Go on then, how do you decide what people deserve? How do you negotiate with others who disagree with you?

> examples above like public libraries

I agree! The nice part about all these mechanisms is that they’re voluntary.

If you’re suggesting that The Engineer’s actions should be constrained entirely by his own conscience and social pressure, then we agree. No laws or compulsion required.


We decide via a hopefully elected government.

These examples aren’t generally voluntary once implemented. I can’t get a refund from my public library or parks department if I decide not to use it.

The social pressure placed on The Engineer is the manifestation of law. That’s all law is: a set of agreed-upon social contracts, enforced by various means.

Obviously, many dictators and governments get away with badly mistreating their subjects, and that’s unfortunate, shouldn’t happen, and shouldn’t be praised as a good system.

I think you may be splitting hairs a little bit here and trying really hard to manufacture…something.


Slavery was (is) also an agreed upon social contract, enforced by various means. What makes it wrong? You clearly have morally prescriptive beliefs. Why are you so sure that your moral prescriptions are the right ones? And that being in the majority gives you the right to impose your beliefs on others?

What if you are in the minority? Do you just accept the hypercapitalist dictates of the majority? Why not?

Law is more than convention. What distinguishes legitimate from illegitimate law?

The only way for people who disagree axiomatically to get along is to impose on each other minimally.


Slavery(!?) was an agreed upon social contract? Like what in the actual are you talking about


You sure seem to know a lot about what people 'deserve' so I'm not sure I can hope to crack the rind of that particular coconut but I will leave you with this: Humans, by virtue of being living, thinking beings deserve lives of fulfillment, dignity, and security. The fact that we have, up until present, been unable (or perhaps unwilling) to achieve this does not mean it's not possible or desirable, only that we have failed in that goal.

Everything else, all the 'isms' and ideologies, are abstractions.


> Humans, by virtue of being living, thinking beings deserve lives of fulfillment, dignity, and security.

You wanting people to have that doesn't mean that people deserve to have that. Fundamentally, no one deserves anything. We, as a species, lived for a hundred thousand years with absolutely nothing except what we could carve off the world by ourselves or with the help of small groups that chose to work with us. Everything else since then is a bonus (or sometimes a malus, but on average a bonus).

Also, as much as it sounds nice to declare such things as goals, deserved or not, it is indeed impossible, and probably not desirable, since, for starters, you can't even define what those things would be like. Those aren't actionable, they're at most occasional consequences of a system that is working to alleviate scarcity of resources.

Unfortunately, we're nowhere near that replicator.

