Hacker News | new | past | comments | ask | show | jobs | submit | jkh1's comments | login

You can track things in live cells with MINFLUX, one of the recent super-resolution techniques from Stefan Hell's lab. Edit: adding a MINFLUX review: https://arxiv.org/pdf/2410.15902


In my field, attempts to reproduce results or conclusions from papers happen on a regular basis, especially when the outcome matters for projects in the lab. However, whatever the outcome, it can't be published: either it confirms the previous results and so isn't new, or it doesn't and no journal wants to publish negative results. The reproducibility attempts are generally discussed at conferences, in the corridors between sessions or at the bar in the evening. This is part of how a scientific consensus is formed in a community.


Care to share which field this is?


cell/molecular biology


> There's no good alternatives on Linux

Maybe I misunderstand what this refers to, but there is RDP software for Linux. I used Remmina [1] on Linux for a few years (now I am using VMware Horizon at work).

[1] https://remmina.org/
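Remmina uses FreeRDP under the hood, which also ships a command-line client; a typical invocation looks something like this (hostname, user, and resolution below are placeholders, flags per FreeRDP 2.x):

```shell
# Connect to a Windows host with FreeRDP's CLI client.
# winbox.example.com and alice are placeholders.
xfreerdp /v:winbox.example.com /u:alice \
    /size:1920x1080 /sound +clipboard \
    +auto-reconnect /network:auto
```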


I tried a few solutions a few years ago to see if there were any alternatives to MS RDP, but at that time I found them all to be lacking. Mostly, the remote screen was not as responsive as with RDP.

A simple test was to play a YouTube video on the remote machine and compare the results of all the Linux solutions with MS RDP. None came close.

I hope things have changed in the past few years; does anyone know?


A few people and I made ourselves some weird "VDI"-like workstations as VMware virtual machines, not that long ago. Those ran Ubuntu. I was fine with plain SSH, but my boss pushed his setup to the limits and used it as his main computer. He even gave back his company laptop and used only a private one to RDP into it (through some corporate VDI auth, but still RDP).

On the VM side it was xrdp. He had a multi-monitor setup working, sound, video; he even used Zoom daily.

The client was Windows; it passed through the camera and so on.

I could not tell I was using something remote, even when I was just using the mouse and keyboard, sitting directly in front of those monitors.

One gotcha was that these were very fast VMs on a fast network.
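For anyone wanting to try a similar setup, the server side is only a couple of packages on Ubuntu. A sketch only; package and service names are per current Ubuntu repos, and details vary by release and desktop environment:

```shell
# Install and enable the xrdp server (listens on TCP 3389 by default)
sudo apt install xrdp
sudo systemctl enable --now xrdp
# If a firewall is running, open the RDP port, e.g. with ufw:
sudo ufw allow 3389/tcp
```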


There are decent clients, but no decent rdp server software yet that I've found.

That said, KRdp looks promising, but I haven't gotten it to work yet.


What was the problem with Xrdp? I use it every day from both Windows and Linux clients and haven't had issues.


Xrdp can't give you an interactive console session.

On Windows I can log into a local session on my console. Then lock the screen and walk away.

On another machine anywhere on the planet, I can then log back into that same local session, and do a bunch of work. Locally, the screen remains locked and secure.

Then I can eventually walk back to that machine, and log back onto the local console and continue exactly where I was.

There is no performance penalty for this - my local session runs exactly as it normally does. The remote session is very fast and efficient.
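Windows does that reattach automatically at login, but it can also be forced by hand with the stock tooling; from an elevated prompt (session IDs vary, "2" below is an example):

```shell
:: List sessions and their IDs (run in an elevated cmd prompt)
query session
:: Reattach, say, session 2 to the physical console:
tscon 2 /dest:console
```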

There is no way that I know of on Linux to replicate this: you have to commit to a session being either "virtual", running through remote desktop (x2go is the best at this), or "local", in which case you get a less performant screen-scraping session. Worse, while you're using the session remotely, the screen is unlocked, visible, and usable by anyone with local physical access (e.g. in a shared office).

It's a problem which should be treated as a massive, ongoing embarrassment for desktop Linux, and a substantial impediment to any notion of wider enterprise adoption: I've been in offices where the standard way everyone worked was remote-desktoping to their office PC workstation via a VPN, and seamlessly going from remote to local sessions was a crucial part of the experience.

Really, we probably just need to solve the screen-lock problem, but getting the experience (i.e. resize, resolution, and app sessions) working properly is also important.


Interesting, my problem was exactly the opposite - I wanted a multi-seat access system with ability to work locally independently of other sessions which is somewhere between painful and impossible to set up on Windows. But I see your point, this is something I hadn't considered. Maybe attaching to the remote desktop system from localhost could work, although the slight decrease in performance would probably drive me nuts.


That's a licensing issue - Windows 10 is perfectly capable of doing this, but it's tied to having a Windows Server instance and remote desktop licenses and blah blah blah <proprietary things>. But if you have all that it does work! And it works the same way - multiple users can come back to the same console and resume their session or whatever.

RDP on Windows works really, really well - with the somewhat absurd outcome, of course, that it is easier to run Linux locally and Windows remotely, since you can use Remmina to connect to the Windows machine so easily.

But the remote Linux experience really should be better.


VNC-like performance, so functional but not really what I consider usable in day-to-day work.

Also, IIRC, it didn't support seamless local vs remote sessions.

That is, if I started a session remotely, I couldn't resume it when I got back home. Or vice versa: if I started the RDP server from my logged-in session, I couldn't start a new session remotely if the desktop rebooted (updates, power loss, etc.).

With Windows I just log in regardless.

Granted, it's been a couple of years since I last made the rounds to evaluate the field, including Xrdp, so I'll give it another whirl soon.


Running locally is sometimes necessary, e.g. when you don't want to send sensitive data to some random third-party server.


Both Ollama and Hugging Face distribute models. The latter has model-hosting services too, but that isn't the only way to use models from there.
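For example, pulling a model for local use from either source is a one-liner (the model names below are just placeholders, not recommendations):

```shell
# Fetch a model through the Ollama registry:
ollama pull llama3
# Or download model files directly from the Hugging Face Hub:
huggingface-cli download TheOrg/some-model
```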



How many workflow management systems do we need? Over 300 [1]. If that's not reinventing the wheel, I don't know what is.

[1] https://github.com/common-workflow-language/common-workflow-...


Pretty much everyone underestimates the complexity of workflow scheduling.

This means that something like 99% of tools have had various random limitations and quirks that make them inapplicable to a lot of use cases.

This is something that people using these tools mostly never realize (and so complain about the "reinvention").

The folks I know who tried to implement a pipeline tool are often a bit more aware of the challenges and how hard it is to make something that is really general.

I say this as someone who evaluated a dozen tools, then extended an existing tool to fix some limitations (Luigi, with our SciLuigi extension), and finally developed our own tool (SciPipe).

Things have gotten better, and a tool like Nextflow is pretty generic these days, although it may also have limitations. For example, before DSL2, Nextflow lacked re-usable modules, which is why we developed SciPipe, which otherwise has a very similar scheduling mechanism to Nextflow (dataflow/flow-based).
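To illustrate what "dataflow" scheduling means here - this is not SciPipe's or Nextflow's actual API, just a minimal Python sketch of the idea - each process fires as soon as all of its input channels have delivered data:

```python
import queue
import threading

def process(name, run, in_chans, out_chans):
    """A dataflow node: block until every input channel yields a value,
    compute, then push the result to every output channel."""
    def worker():
        inputs = [ch.get() for ch in in_chans]  # blocks until upstream fires
        result = run(*inputs)
        for ch in out_chans:
            ch.put(result)
    t = threading.Thread(target=worker, name=name)
    t.start()
    return t

# Wire up a tiny pipeline: "align" fans out to "sort" and "stats".
a1, a2, s_out, st_out = (queue.Queue() for _ in range(4))
threads = [
    process("align", lambda: "aligned-reads", [], [a1, a2]),
    process("sort",  lambda x: f"sorted({x})", [a1], [s_out]),
    process("stats", lambda x: f"stats({x})", [a2], [st_out]),
]
for t in threads:
    t.join()
sorted_result = s_out.get()
stats_result = st_out.get()
print(sorted_result, stats_result)
```

The execution order falls out of the channel reads themselves: nothing polls a dependency graph, which is the property that makes this style of scheduler both simple and easy to get subtly wrong.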

Still, today I have mixed feelings about using extremely complex tools that depend on a single organisation to keep them updated, and about not being able to easily debug execution, among other things. That is why we wanted a simple library that we could understand ourselves and run through a debugger. (And it didn't hurt that we could get complete audit logs per output file, which can be very useful both for provenance and for debugging, and is not found in almost any other tool.)

Just to give some examples of why someone might still entertain thoughts about developing separate tools.

All in all, the widely used ones like Nextflow (and Snakemake) are great tools. They just aren't optimal for every use case and situation.


The problem is that ISO standards are not open.


What seems to help in the life sciences is the existence of public repositories. These could be replaced by portals that collect info on data hosted elsewhere. But the main advantages are that they provide clear, well-known places to start looking, and they curate, standardize, and organise the metadata to make it searchable.


In the life sciences there are dedicated structured repositories. These are searchable by keywords and often cross-reference each other. They are the go-to places for finding data.


In biology we now routinely produce datasets in the multi-terabyte range. It can easily be n = 3 × 10 TB, for example when imaging 3 fly embryos by light-sheet microscopy.

