Funny how there are a few domains where the same problem is solved over and over again. Probably an indication that a good solution is truly missing.
A long time ago I wrote a tiny audio synth. And yep, I wrote my own minimal GUI which had stuff like sliders, and most importantly knobs which general GUIs typically don't provide.
Now I'm working on a non-audio webapp, and I have a problem where a knob would be the ideal solution. I searched far and wide for a Vue knob, and what I found was not usable. It worked if you "rotated" it circularly, but it didn't work if you pulled it up/down like a slider. You might say "use a slider then if you want to pull it up/down". True, but a slider takes a lot of UI space, which is sometimes very precious. More importantly, the slider resolution is limited to its height, while a knob could, for example, use the whole screen height for a full turn.
> minimal GUI which had stuff like sliders, and most importantly knobs which general GUIs typically don't provide.
They don't provide knobs because they're a bad idea for input. Holding mousedown while drawing an invisible arc with the cursor is an awkward way to change a value. Sliders are generally better but still not great on interfaces with cursors (they're better on touchscreens). Skeuomorphism can be a useful visual design cue but it needs to be secondary to the input capabilities of the device.
Up and down arrow buttons (or plus/minus, or whatever makes sense for the value) are very compact and operable with a variety of input methods. Buttons need a way to display the value, but there are many options, including putting them in the center of a ring of numbers like a knob would use.
In a good knob it's not an arc but just clicking and dragging straight up or down that makes it move. A lot of VST audio plugins work like this, and it lets them get a lot of controls into a small area.
But then the gesture doesn't correspond to the interface at all. You could get the compactness plus a visual that corresponds by having something like a vertical slider that appears when you mousedown on the spot.
> They don't provide knobs because they're a bad idea for input
I like knobs because the distance from the center of that control (radius of your arc movement) can be a “fineness” or “coarseness” control. I think sliders don’t usually offer this.
That's why you don't use circular dragging for a knob, you use vertical/horizontal drags (or a combination of both, JUCE supports those for reference, I forget what it defaults to). I've only seen rotary dragging in a handful of products and it always sucks.
Arrow buttons are pointless when you're trying to dial in a value - which is the whole point of using knobs in audio software. They don't give you adjustable granularity or an easy way to continuously adjust - and if you do give it those functions, you've made a knob without the visual feedback of knob position. Or like you say, put it in the center of a ring of numbers, but again, you've just made a knob!
Yes, if you have to click up, then down, then up again, and down to nail down a "just right" value, those buttons would be more difficult but their operation would be apparent. You might be able to get to a "just right" value if mousedown can be held while the cursor is moved between the buttons but a slider is probably better.
The knob behavior you're describing is essentially an invisible slider. An invisible line is better than an invisible arc but it's still not good. Why not make it visible? A visible slider could appear on mousedown so there's something that corresponds to the gesture.
It's not non-intuitive. The most acclaimed UIs for audio software are littered with knobs, and they have to be. You're talking about dozens to hundreds of continuous parameters that need to be dialed in by hand in conjunction with the user's ears. It's also common practice to have a hotkey like ctrl to swap between coarse/fine tuning of the parameters, so you can dial it in without changing controls or letting go of the mouse - which is often done in real time during playback.
In your first example, you've just made a knob that doesn't look like a knob.
In your second example, you're confusing mouse position and gesture. Linear sliders map mouse position to state changes; rotaries map gesture. Another way of looking at it is that a rotary slider's state is decoupled from the input, and because of that it is a much more responsive control. Not to mention, you have an infinite 2D area to traverse to change the knob value with respect to the widget itself, not a finite, defined path.
And the huge reason that linear sliders (at least off the shelf ones) are impossible to use for audio UIs is that 99% of your user's time is spent adjusting from the previous position, not resetting it to a new one. That means any behavior where a click resets the position (like clicking just past the current position) is broken. Not just because it's trash UX for the application, but because in audio systems jumping parameter values can cause artifacts, and mitigating them is not without cost in performance and fidelity.
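The behavior being debated here (a knob driven by vertical dragging, with a modifier key like Ctrl switching between coarse and fine adjustment) boils down to a small mapping function. A minimal sketch, with all names invented for illustration:

```typescript
// Sketch: map a vertical mouse drag to a knob value in [0, 1], with a
// modifier key (e.g. Ctrl) for fine adjustment. Not from any real library.

type DragState = { startY: number; startValue: number };

// pixelsPerFullRange: how many pixels of vertical travel span the whole
// range, e.g. 200px, or the screen height for maximum resolution.
function knobValueFromDrag(
  drag: DragState,
  currentY: number,
  pixelsPerFullRange: number,
  fine: boolean // true while the modifier key is held
): number {
  const sensitivity = fine ? 0.1 : 1.0; // 10x finer with modifier held
  const delta = (drag.startY - currentY) / pixelsPerFullRange; // up = increase
  const value = drag.startValue + delta * sensitivity;
  return Math.min(1, Math.max(0, value)); // clamp to [0, 1]
}
```

Because the value is computed relative to the drag start rather than absolute cursor position, there's no jump when the user grabs the knob, and holding the modifier mid-drag just changes how fast further motion accumulates.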
> It's not non-intuitive. The most acclaimed UIs for audio software are littered with knobs
That doesn't make them intuitive, they're merely familiar to people already familiar with such audio software.
> hotkey like ctrl to swap between coarse/fine tuning of the parameters
Image editors and video editors have keys like that too, they're not related to knobs.
> In your first example, you've just made a knob that doesn't look like a knob
That's great! I preserved the functionality while making the action more apparent.
> a rotary slider's state is decoupled from the input
That's what I'm criticizing.
> Not to mention, you have an infinite 2D area to traverse to change the knob value with respect to the widget itself, not a finite, defined path.
The visual feedback of both sliders and knobs is restricted to finite, defined paths. Whether the control of the input is restricted to that path depends on how it was designed. With some sliders, you can only move the control while the cursor remains within the bounds of the slider (the iTunes volume control, for example). With other sliders, once you've clicked on the control, the cursor can move outside the slider; I think this is the "infinite 2D area" you're talking about. The macOS menu bar volume and Sound Preferences sliders are examples of this latter behavior. I also found this example (you need to click the play button for the code to run).
> That means any behavior where a click resets the position (like clicking just past the current position) is broken
Yes, for what knobs are used for, you wouldn't want a slider control to jump to whatever point on its line was clicked on. This is another behavior that is found on some sliders (both iTunes and macOS Sound volume controls) but not others (the P5 example above).
On a more meta level, a whole industry decided after 30 years of iteration that knobs that work like invisible sliders are the best controls, and you - who by the looks of it has never made extensive use of audio software - are claiming that they are all wrong because of some philosophical argument. You might be right, you might be the rare visionary who sees a better way of doing things where others cannot, but the burden of proof is on you.
> You might say "use a slider then if you want to pull it up/down". True, but a slider takes a lot of UI space which sometimes is very precious. More importantly, the slider resolution is limited to it's height, while a knob could for example use the whole screen height for a full turn.
Perhaps the solution is to make the slider pop up if you press a button?
In the early days of the iOS App Store I worked on a music iPad app. Through many hours of user testing and researching prior art, the best solution we came up with was detecting the trajectory of the user input and supporting both vertical dragging and dial-like rotation. It took a shitload of trial and error fine tuning, but it eventually turned out pretty well.
I think many music apps these days give you the option buried somewhere in settings.
That's what I did. Made it slide by default (because I hate trying to twist a virtual knob). I briefly had a translucent rectangle pop up when the control was in slider mode, but it turned out not to be helpful, as linear dragging seemed to make sense to everyone and I only had like eight options in total.
The one nice innovation I had was automatically detecting horizontal or vertical drags - useful for things like pan controls where a horizontal motion makes more sense.
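The trajectory detection described above can be sketched as a small axis-locking check: wait until the pointer has moved past a small threshold, then commit to whichever direction dominates. Everything here (names, the 8px threshold) is illustrative:

```typescript
// Sketch: decide whether a drag is horizontal or vertical once the pointer
// has moved far enough from where it went down. Names are invented.

type Axis = "horizontal" | "vertical" | "undecided";

function detectAxis(dx: number, dy: number, threshold = 8): Axis {
  const ax = Math.abs(dx);
  const ay = Math.abs(dy);
  if (Math.max(ax, ay) < threshold) return "undecided"; // too early to tell
  return ax > ay ? "horizontal" : "vertical";
}
```

In a real control you'd call this on every pointer-move event while the axis is still `"undecided"`, lock it in once it resolves, and route subsequent motion to the pan (horizontal) or value (vertical) handler.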
Is it better to write one's own GUI in this case? I believe most toolkits have a way to add new controls, so it should be less work to add a knob to Gtk, Qt, or any other toolkit.
Then you hit the second requirement: complete bitmap theming, often of a skeuomorphic kind. This is extremely difficult to do in something like Gtk/Qt.
1. Motif came first. Among people who knew what Motif is, the look of Windows 95 was widely recognized as being very similar to Motif to the point of being a knock-off.
2. See the diamond-shaped radio buttons in the second example. :)
Oh, I didn't see the second screenshot. Yeah that one is more like what I remember Motif looking like. The first one though is definitely going for the Windows 95 look.
If you want this functionality, I recommend not using it as-is, given the security vuln GitHub is currently reporting. Rather, anyone has my permission to copy the code verbatim into your project. It's a pretty simple gem.
To be honest, even the coarsest-possible permissions of "can do I/O" vs. "can't do I/O" would be exceedingly effective at stymieing these sorts of attacks; all malicious software of this sort needs to do I/O at some point, and relatively few libraries actually have a good excuse to do I/O (though logging might be thorny).
That said it seems easier said than done to impose those sorts of restrictions on a per-dependency basis. Attempts to statically verify the absence of I/O sounds like a great game of whack-a-mole, and I don't know how you'd do it dynamically without running all non-I/O dependencies in an entirely separate process from the main program.
> few libraries actually have a good excuse to do I/O (though logging might be thorny).
Yeah, logging would be tricky...
Maybe a "logging" capability could be created. Separated from other I/O.
Such a capability would be weird, and nonstandard, and messy, cutting across several abstraction layers. But if pulled off, it might be worth the effort.
That's solved in similar frameworks by separating open from read/write. You open (or inherit from somewhere) a logging socket, drop the open privileges, and retain the permission to write to the log socket.
or AppArmor, SELinux, grsec, TOMOYO, ... But those systems can't integrate into a scripting language's per-library use case without some serious thread/IPC overhead.
These others can achieve what's intended, but the entire flavour of the discussion is a dead ringer for pledge's purpose and interface, which is much simpler and very much internal to the software (a self-check of sorts).
Haskell indirectly solves this by separating `trace` (a form of logging) from IO (`trace` is a procedure that logs function calls, while all other IO must be contained in an IO monad).
> That said it seems easier said than done to impose those sorts of restrictions on a per-dependency basis.
Isn't this the sort of thing type inference is made for? Along with return types, functions have an io type if they're marked (std lib) or if they contain a marked function. Otherwise they have the pure type.
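One way to make the "io type vs. pure type" idea concrete without full type inference is to pass capabilities explicitly, so the signature itself reveals whether a function can do I/O at all. A sketch in TypeScript terms - the `Io` interface and all names here are invented for illustration, not a real API:

```typescript
// Sketch: capability-passing style. A function that doesn't receive an Io
// value cannot, by construction, touch the filesystem or network.

interface Io {
  readFile(path: string): string;
}

// Pure: no Io parameter anywhere in the signature.
function scorePassword(pw: string): number {
  return Math.min(pw.length / 16, 1);
}

// Effectful: the Io requirement is visible in the signature, much like an
// IO type would be.
function scoreWithWordlist(io: Io, pw: string): number {
  const common = io.readFile("common-passwords.txt").split("\n");
  return common.includes(pw) ? 0 : scorePassword(pw);
}
```

The nice property is that the effectful surface is auditable by grepping for `Io` parameters, and tests can pass a fake `Io` instead of touching the real filesystem.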
Doing this usefully does require more than just “does IO” — e.g. does that mean it can load another module, read a list of too-common passwords, write to a log file, or read your ~/.aws/credentials? Similarly, does allowing networking mean it can talk to anything or just a few well-known hostnames and ports?
This isn’t to say that it’s a bad idea but there are a ton of details which get annoying fast. I know the Rust community was looking into the options after the last NPM hijack was in the news but it sounded like it’d take years to make it meaningfully better.
> running all non-I/O dependencies in an entirely separate process from the main program.
Maybe that's not such a bad idea. This "strong_password" thing is written in Ruby, a few milliseconds delay is probably not noticeable anyway and vastly preferable given the security implications.
The design of macOS and iOS has been moving this way. Many of Apple's first-party applications and frameworks have been broken down into backend "XPC services" that (attempt to) follow the principle of least privilege[1]. Each service runs in a separate process, the system enforcing memory isolation and limiting access to resources (sandboxing).
It's a good idea on paper, but has caveats. Every service is responsible for properly authenticating its clients, and needs to be designed so that a compromised client cannot leverage its access to a service to elevate privileges. Sandboxes are difficult to retrofit onto existing programs. The earlier, lowest-common-denominator system frameworks were not originally written with sandboxing in mind. There are numerous performance drawbacks.
For Apple ecosystem developers, XPC services are also how "extensions" for VPN, Safari ad blockers, etc. are written, for a mix of security and stability benefits.
Though funnily enough, as Apple has pursued these technologies, many HN commenters have decried the walls of the garden closing in.
Hm, interesting. One way to solve this would be to have a language with a very rigid import system - it should be _impossible_ for a library to use a module it hasn't imported, even if that module has been loaded elsewhere in a process. This is probably harder than it looks, and many languages have introspection features that are incompatible with this goal.
With a rigid import system, each library would be forced to declare what it's going to import (including any system libraries), and then you could e.g. enforce a warning + confirmation any time an updated dependency changes its import list.
It doesn't prevent you from getting owned by a modified privileged library, but it's better than the current case. Unfortunately, it probably requires some language (re-)design to fully implement this approach.
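The "warn + confirm when an updated dependency changes its import list" check is essentially a set difference over declared manifests. A toy sketch - the `Manifest` shape is hypothetical, real lockfiles differ:

```typescript
// Sketch: flag imports that appear in an updated dependency's manifest but
// not in the previously vetted one. Types are invented for illustration.

type Manifest = { name: string; imports: string[] };

// Returns the imports a human should confirm before accepting the upgrade.
function newImports(oldM: Manifest, newM: Manifest): string[] {
  const known = new Set(oldM.imports);
  return newM.imports.filter((i) => !known.has(i));
}
```

A package manager could run this on every update and only prompt when the returned list is non-empty - a quiet upgrade is one whose import surface didn't grow.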
> With a rigid import system, each library would be forced to declare what it's going to import (including any system libraries), and then you could e.g. enforce a warning + confirmation any time an updated dependency changes its import list.
Which means you would get warnings on pretty much any functional upgrade of most dependencies, which would make the whole system useless from a security point of view.
In theory, a point release of a library really shouldn’t be requiring new permissions, and you shouldn’t be randomly upgrading your code to newer major versions without checking for compatibility anyway.
Why should a functional upgrade of a dependency introduce new dependencies anyway? A library that sets out to do a particular thing shouldn’t grow new features that require new capabilities willy-nilly.
> Why should a functional upgrade of a dependency introduce new dependencies anyway? A library that sets out to do a particular thing shouldn’t grow new features that require new capabilities willy-nilly.
Why not? I've often done upgrades with the sole purpose of replacing questionable, hand-written code with external dependencies I've discovered that do the same thing, but better (more features, more tests, more eyes on the code, more fixed issue reports than my often-closed-source code). From string parsing to networking, this happens a lot. The external contracts of my libraries don't change a bit, so why waste a major version? "I'm using someone else's code instead of what I YOLO'd myself" seems like a poor reason to rev a package version--and even if it's not, where do you draw the line? Cribbing code from StackOverflow?
> Hm, interesting. One way to solve this would be to have a language with a very rigid import system - it should be _impossible_ for a library to use a module it hasn't imported, even if that module has been loaded elsewhere in a process. This is probably harder than it looks, and many languages have introspection features that are incompatible with this goal.
If you look at dependencies as black-boxes that contain their own transitive dependencies, then sure, any given "root-level" dependency of sufficient complexity might end up requesting every permission.
On the other hand, if each dependency in the deps tree had its own required permissions, and you had to grant those permissions to that specific dependency rather than to the rootmost branch of the deps tree that contained it, then things would be a lot nicer. The more fine-grained library authors were in splitting out dependencies, the clearer the permissions situation would be; it'd be clear that e.g. a "left-pad" package way down in the tree wouldn't need any system access.
On the other hand, it'd make sense if dependencies could only add new transitive dependencies during "version update due to automatic version-constraint re-evaluation" if the computed transitive closure of the required permissions didn't increase. Otherwise it'd stop and ask you whether you wanted to authorize the addition of a dep that now asked for these additional permissions.
It's also worth noting that under this system, if you trust a large library like React, but don't trust its dependencies, you might still trust that React is sandboxing its own imports correctly -- and then you could "inherit" React's permissions and be fine without overriding anything.
If you're really worried, then you still could go over your entire tree and override the default settings. But there's nothing that would mean you would be required to do that.
People are thinking about this using the phone/website model, where permissions are only applied at one level. With dependencies, whatever giant framework that you're pulling in could be using the same permissions system to secure its own dependencies, which would make you significantly safer.
Under the current system, you have to hope that none of the authors in your dependency chain make a mistake and get compromised. If everybody can sandbox anything, then you only have to hope that most of those authors don't make a mistake.
If somebody attaches malware to a dependency of a dependency, and if even one person along that chain is following best practices and saying, "yeah, I don't think this needs a special permission", then they've likely just prevented that attack from affecting anyone else deeper down the dependency chain.
Sandboxing in package managers is something that could actually scale pretty well; much better than it does for websites/phones/computers.
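The per-dependency model described in this thread reduces to computing a transitive permission closure over the dependency tree, then rejecting version bumps that grow it. A toy version, with all types invented:

```typescript
// Sketch: collect every permission required anywhere in a dependency's
// subtree. A tool could diff this closure across versions and stop to ask
// the user when it grows.

type Dep = { name: string; permissions: string[]; deps: Dep[] };

function permissionClosure(dep: Dep): Set<string> {
  const perms = new Set(dep.permissions);
  for (const child of dep.deps) {
    for (const p of permissionClosure(child)) perms.add(p);
  }
  return perms;
}
```

Note this is the coarse "rootmost branch" view; the finer-grained scheme above would instead keep a closure per node and grant each dependency only its own set.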
That seems like a strategy that would cause significant slowdowns and hassles in development.
High-level (i.e. consuming a lot of dependencies at a lot of levels) tools would simply apply an "allow everything" dependency policy rather than deal with tons of issue reports from people who wanted to import the high-level library in a less-than-root-permissioned project.
Additionally, lots of upgrades do increase the dependency surface. Resolving local usernames is a pretty fundamental thing a lot of dependencies would need. Now consider the libc switch from resolving names via /etc/passwd to resolving from multiple sources (including nslcd, a network/local-network service). If every dependency up the tree adopted a "lowest possible needed I/O surface" permission model and then that change happened, there would be hell to pay: maintainers would take the shortest path and open up too many permissions; maintainers wouldn't upgrade and would leave some packages trapped in a no-man's-land; or maintainers would give up on pulling in prone-to-changing-permissions dependencies, leading to even more fragmentation.
This sounds like GHC's Safe Haskell extension. Its biggest selling point is that a lot of capability safety could be inferred in packages without the package author separately specifying capabilities.
The basic idea is to disallow the remaining impure escape hatches in Haskell in most code, requiring authors of libraries that do need those escape hatches (e.g. wrappers around C libraries) to assert that their library is trustworthy, and requiring users to accept that trustworthy declaration in a per-user database.
It actually was very promising because the general coding conventions within Haskell libraries made most of them automatically safe, so the set of packages you needed to manually verify wasn't insane (but still unfortunately not a trivial burden, especially if your packages relied on a lot of C FFI).
Unfortunately I have yet to see it used in any commercial projects and it seems in general not to get as much attention as some other GHC extensions.
I know this is about ruby, but it's worth noting that this kind of thing would be solved by effect systems, e.g. Haskell's IO type. If IO isn't part of the signature, you know it's cpu only. Furthermore, you can get more specific such as having a DB type to indicate some code only has access to databases rather than the internet as a whole.
While that might be true, you are not going to switch the world to program in Haskell.
We need a solution which also works for most used languages, JS/C++/Java/Python..., which suggests that it should be done at a higher level, maybe with OS involvement somehow.
The .NET Framework 1.0 included "Code Access Security", which included mechanisms to authenticate code with "evidence" (as opposed to traditional 'roles') and then apply permissions similar to your example: DnsPermission, FileIOPermission, RegistryPermission, UIPermission, and so on.
Unfortunately, the architecture was too complex for most developers and fell by the wayside. It was finally removed from the 4.0 Framework after being deprecated for some time.
Linux has seccomp for the same purpose. The most restrictive mode of seccomp permits only read, write, exit and sigreturn, which is good for a jailed CPU-only process (read/write commands from a pipe and exit when done - no opening new files or sockets).
Couldn't you theoretically shove all of your untrusted "non-I/O" libraries into a Service Worker? They wouldn't have direct access to the DOM or network I/O that way. It would involve writing some glue code, but perhaps it's worth trading that off for increased "security" (trust)?
EDIT: never mind, looks like I was mistaken about the network i/o part of this... Might be interesting to have a browser-level "sandboxed service worker" for this purpose though...
The skeptic in me thinks that it's never going to work in practice due to 'worse is better': Any system with the 'I/O vs no-I/O' system will have more friction than one without it, and there is no measurable benefit until you get hacked, so most people will not use it (or declare everything as I/O).
We can't retrofit this onto an existing community and code base... see Python 3 for details. People just won't make extensive changes to their code base if they can't see an immediate, tangible benefit.
You can't have it all? This quote from the article makes it sound like you can barely have anything:
> In fact, recent tests of newly launched commercial 5G networks in the United States are showing that millimeter wave signals are not traveling more than 350 feet, even when there are no major obstructions. They are also not penetrating walls or windows, making indoor coverage difficult.
350 ft with no walls or windows in the way?? So you'd need about 4 antennas just to cover the vertical height of the Empire State Building? How is this considered in any way viable, even in urban areas? Why not just build out a WiFi mesh network instead, if the numbers are that poor?
Really hoping there are some important details missing here.
I don't really know the trade offs. Maybe they believe a longer short range would be best?
There's a vehicle test track with Ericsson 5G in Sweden (Astazero) where you can stream while driving. I don't know how many radios they use. There are more installations around the globe for showcasing that.
There have been trials of multi-Gbit links to race cars, but those are more or less lab tests. With more cars you will need to share the spectrum.
> Really hoping there are some important details missing here.
The laws of physics dictate that the only way we can have large bandwidth AND a large number of simultaneous users is to have tiny cells which cover only a few square meters.
5G, and future Gs will put these micro cells everywhere, on top of each street light, buried under the sidewalk...
Just like in a building where every apartment has its own private WiFi: all of them have fast WiFi, while at the same time all of them have mediocre mobile connections.
It could have some serious niches. For example, sensor/camera networks on an airport site. Or as a cheaper alternative to short-range microwave fixed links.
Actually, reactivity in GUIs (maybe "declarative GUI programming" is a more accurate term?) is also "the old way" in GUI development. I mean, it was already done a decade ago, even before React. Try QML. The view is automatically updated when a signal is triggered or its state changes.
You will notice that it is surprisingly simpler than React.
React got so popular because React implemented it very nicely in the web browsers, not because it brought new concepts.
This is a far cry from React where you just use language-native data structures directly (you only need to do mutations in specific ways).
As far as I can tell, you also can't just use native language constructs like functions/ifs/.map, etc. to compose the UI elements. Instead you have containers like Repeater.
It seems very different to me compared to React, like built around a significantly different philosophy.
Your comment seems to just be highlighting the fact that inspecting types at runtime is non-idiomatic at best in C++ or similar languages, so it would make sense to require a specific data structure or format.
Even defining what a "native" type is seems like it would be fraught. I guess the c++ way would be to accept begin and end iterators - but of what type?
I don't think I understand how RTTI/reflection relates to this.
When I imagine how a C++ implementation of something like React would look, my first thought is a bunch of functions which take whatever custom types they need (props), and return a tree of UI primitives (analogous to DOM element representations). Nothing that would require inspecting types.
And that works in other languages because you can iterate through trees and properties without needing to specify how to iterate through them or how they are stored.
Think about it. How do you iterate the tree? What even is a tree? The typical c++ answer depends on templates and iterators within the template. Which means they wildly change at compile time. It is clumsy for a shared library to deal with this, or, say, bind properties from an XML file or similar parsed at runtime.
Hence, it would make sense for a c++ solution to impose its own format or base class for trees or models. (Perhaps a template could help to wrap it.)
There may be some limitations and some things may need to be done differently because of C++, but from a brief look it should be basically possible. The UI library doesn't need to iterate through props, those can be a single opaque structure from its point of view.
I tried doing something similar in Kotlin a few years back and from what I remember there wasn't any reflection used for that part. https://github.com/peterholak/mogul
The .NET Framework, specifically WinForms and WebForms, did that. Of course you could change everything manually, but DataBinding was a core concept which relied on changing data to change the contents of the GUI.
WPF still has the same kind of approach and to be terribly honest, Silverlight, as hated as it was, did it before React.
Yet it allows building native applications where the UI structure is defined by HTML, style by CSS, and reactivity by script or by Rust (or by C++, Go, Python, etc.)
1. FlexBox breaks the existing CSS box model, which mandates that the dimensions of an element are defined by its width/height properties.
2. FlexBox introduces 12 (sic!) new properties into the already overcrowded CSS property map. But as demonstrated by Sciter, the same can be achieved with just one property, `flow`, and flex units.
3. Flexbox and Grid are conflicting in the sense that the flexibility (as a feature) is defined in two different ways: separate properties (FlexBox) and 'fr' units (Grid) - overall CSS architecture becomes a zoo. Yet CSS has that famous 'auto' value that in some cases is just 1fr ( 1* in Sciter terms).
4. FlexBox is applied through the 'display' property, which is conceptually wrong. 'display' specifies how the element itself is placed among its siblings, but flexbox is about the layout of the element's children. These two are orthogonal. Example:
display:list-item; or display:table-cell; cannot be flexboxes. That's purely an architectural specification error - physically nothing prevents a <td> from using flexbox layout for its children.
5. Flexbox arrived too late. We started talking about the feature 12 years ago on the www-style WG at W3C. And all these years Sciter has already provided flexibility (a must for desktop UI).
Not the parent, but in my view it is basically a 1-dimensional subset of Grid, which is great for mostly-1D website design.
But when it comes to making applications (which often use the whole screen, with tabs and whatnot, instead of being scroll-oriented), it makes some things really difficult, either still requiring custom code to get them just right, or needing a handful of wrapper elements for relatively simple layouts.
Don't get me wrong, it's _miles_ better for web layout than the old systems, but since Sciter seems to be focused on application-style layouts, it makes sense to prefer a layout system that can better represent common patterns in that domain.
Also, if you plan on going responsive, the required flex wrappers kind of hold you back when you need to reorient a bunch of your layout to fit a vertical screen (or horizontal, if you're doing mobile-first), while Grid's `grid-area` is beautifully flexible in that regard.
TL;DR: If you're gonna have to have Grid, then just having Grid is perfectly fine as it can do everything Flex can and more.
I know about revery[0] and cljfx[0], though I haven't had the chance to use them in anger. These both seem like very nice approaches to me - I'm partial to cljfx because I've been utterly spoiled by the clojure/script development experience, but I've also come to enjoy the typescript environment, and revery seems to be another step along that road.
Fn-fx is a more complex beast, and no one seems to really know how it all works internally except the creator, who isn't maintaining it, so unfortunately it's in a semi-unmaintained state. But the interface was a bit cleaner when I tried it.
cljfx is actually inspired by fn-fx, and presents a much cleaner and more usable API, just going by the docs. I'm pretty sold on data-based APIs instead of macro-based ones, though, so maybe it's just catering to my preferred style of development.
oh okay. you sound more knowledgeable than me, so thanks for the input. I'll stick to cljfx then. I found it clunkier to mix with Java code, so I was considering switching back to fn-fx
I used WPF extensively 10 years ago. It's nowhere near as easy to create a complex GUI as with Vue/HTML/CSS. And that's just functionality. If we talk about looks, WPF is very clunky to theme.
I happen to have the opposite view, especially with Blend and component libraries in the mix.
CSS had to adopt WPF's grid design in order not to remain a poor table sibling where layouts are concerned.
While WPF is backed by DirectX, CSS requires playing with z-order so that some browsers might eventually move the rendering onto the GPU, and even then one needs to take care, because GPU memory is limited.
While in WPF I can render anything with pixel-level control if I wish to do so, on the web I am still waiting for Houdini and worklets to actually become available.
And the whole template language with events and theming for low level customisation of control behaviours? Nowhere to be seen.
Using Blend feels to me like using Word to create HTML pages - the resulting output is horrible.
CSS layout was indeed a struggle many years ago, but now with flexbox I never failed to put stuff exactly where I wanted, and I barely understand it. And the new shiny thing, CSS Grid, is supposedly even better at controlling layout.
Similarly to many MS technologies from that era, WPF/XAML is not half bad at the core, but so verbose. Probably intended to be primarily generated by tooling.
Having been a Turbo Vision and OWL user, which lacked UI designer support, following up with VB and Delphi/C++ Builder, I never understood the macho attitude of doing GUI design manually without tooling support.
Any framework that comes with tooling support out of box is a plus for me.
Hence in what concerns Web, I care mostly about WebComponents, CMS middleware with page designers and SPA frameworks like Angular.
I reckon form designers are excellent for forms. The further you go beyond a data CRUD app, the less benefit you get from visual designers, because more stuff will be animated, contextual, custom, etc.
CRUD apps are a really big application space to cede, though; I totally think tooling is worth it.
It depends very strongly on how you used WPF. It had many footguns, due to a foolish decision to seemingly support the expectations of winforms developers.
Microsoft and others have MVVM frameworks, such as WPF, which have reactive data binding. Qt Quick also, but I haven't tried it myself. That said, it is not that similar to React or Vue because the "DOM"s are so different.
Try to use WPF from anything other than dotnet. It will be hard, given that you will have to hack its main loop to work with external code. I'm not sure that has ever been done.
I've been involuntarily involved in webdev for quite a few years. I would give anything to get that quick and easy manner of building GUIs in GTK+ on the web.
I wrote my first GTK+ program in my high school years in 2014-2015, and I can't think of any native toolkit that ever approached the ease of working with GTK.
I've been working on the Windows side with every one of Microsoft's takes on GUI libraries, and on Android; I've tried Qt (before QML) and am involuntarily dealing with all those web toolkits on almost every project involving the web.
Microsoft very actively works on React Native for Windows, and last I checked there were decent RN desktop ports for Linux and MacOS. So that's one potential option.
JavaFX actually has a lot of reactive elements in its design. Basically every attribute of every GUI element is an “observable” type which allows you to register handlers for doing stuff when the internal value changes.
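To illustrate the pattern: JavaFX's `javafx.beans.property` classes (e.g. `SimpleIntegerProperty`) provide this out of the box; the `ObservableInt` below is a minimal plain-Java sketch of the idea, a hypothetical stand-in rather than the real JavaFX API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

// Hypothetical minimal stand-in for a JavaFX property: a value holder
// that notifies registered listeners whenever the value changes.
public class ObservableInt {
    private int value;
    private final List<BiConsumer<Integer, Integer>> listeners = new ArrayList<>();

    public ObservableInt(int initial) { this.value = initial; }

    public int get() { return value; }

    // Like JavaFX's addListener: the handler receives the old and new values.
    public void addListener(BiConsumer<Integer, Integer> listener) {
        listeners.add(listener);
    }

    public void set(int newValue) {
        if (newValue == value) return; // skip no-op changes, as JavaFX does
        int old = value;
        value = newValue;
        for (var l : listeners) l.accept(old, newValue);
    }

    public static void main(String[] args) {
        ObservableInt width = new ObservableInt(100);
        width.addListener((oldV, newV) ->
            System.out.println("width: " + oldV + " -> " + newV));
        width.set(250); // prints "width: 100 -> 250"
    }
}
```

In real JavaFX the same registration call sits on every control attribute, which is what makes UI code built on it feel reactive.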
There seems to be a perception that anything running on the JVM is going to be slow and look bad. Considering I work with Java every day in a gigantic Swing application, (IntelliJ IDEA, if you didn’t guess) and it has great performance for all the things it’s doing for me, that seems to be an old holdover from early days of Java. Also enterprise apps, which of course are going to be bad and slow. At least the JVM has real threads for background work which don’t take another 100MB of RAM like an electron background worker :)
Yeah, it's hard to wash out a stain like that. In general, JITed JVM-based applications perform as well as or better than AOT applications, but they do still need more memory.
The “or better” tagged into JVM applications really needs to go away. Realistically, most JVM applications do not have as good performance as native ones. Yes, it’s possible to run better, but it’s very hard to create code that JITs to something faster than a thing written in C/C++/Rust/etc. You have to basically write code that looks like C to make it run as fast or faster, because there’s real overhead that comes with a lot of things in the JVM. You don’t just write standard Java or Scala and suddenly it’s as fast as C++, in general. It’s still heckin blazing fast, to be fair, but not AS heckin blazing fast. (As long as you aren’t just going insane with objects, at least, such as huge lists of integer objects)
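The "huge lists of integer objects" point is concrete: boxed `Integer` elements each live as separate heap objects behind references, while a primitive array is a flat block of memory. A rough sketch of the two styles (illustrative only, not a benchmark):

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    // Summing boxed Integers: each element is a separate heap object,
    // so the loop chases references and unboxes on every iteration.
    static long sumBoxed(List<Integer> xs) {
        long total = 0;
        for (Integer x : xs) total += x; // implicit unboxing
        return total;
    }

    // The "looks like C" version: a flat primitive array, no indirection.
    static long sumPrimitive(int[] xs) {
        long total = 0;
        for (int x : xs) total += x;
        return total;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        List<Integer> boxed = new ArrayList<>(n);
        int[] primitive = new int[n];
        for (int i = 0; i < n; i++) { boxed.add(i); primitive[i] = i; }
        // Both compute the same sum, but the primitive version stores ~4 bytes
        // per element versus an object header plus a reference per element.
        System.out.println(sumBoxed(boxed) == sumPrimitive(primitive)); // prints true
    }
}
```

This is the kind of memory-layout overhead the JIT can't optimize away, which is why idiomatic high-level Java rarely matches C++ without some C-style restructuring.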
Microsoft. Their XAML apps are pretty much web apps done right, ie not javascript and a more powerful layout system -- and everything on the gui part is reactive.
Why do you think the strongest of the strong bacteria are found in hospitals? Because they do a lot of cleaning and disinfecting. The weak are purged, the few who remain don't have to compete with the weak on other axes, thus they can concentrate on developing resistance.
In your home, where you don't clean as aggressively or as frequently, there is a wide variety of bacteria that compete against each other and can't spend too much on developing resistance.
> Why do you think the strongest of the strong bacteria are found in hospitals?
I presume you mean strong against antibiotics and sterilisation. The evolved bacteria are likely weaker along other dimensions (ones that are less important for survival in a hospital setting). Much of evolution is compromise: improving X has a cost in Y.
Aside: the mega-plate video is an unbelievably good demonstration of bacterial evolution: https://m.youtube.com/watch?v=plVk4NVIUh8 especially note the tree at the end showing how many different mutations evolved.
Could you please stop posting unsubstantive comments? Also, could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://hackertimes.com/newsguidelines.html.
HN is a community. Users needn't use their real name, but should have some identity for others to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
Are you talking about the Samsung Fold? Because they didn't sell even one of those.
You should be careful what you wish for. If companies were fined billions of dollars for faulty products, in a couple of years you'll have nothing to complain about. And not because everything will be amazing.
Just imagine if Microsoft/Apple/"Linux" was fined 1 billion for every major flaw in their OS. There would be no OSs any more, because they would all be bankrupt and nobody would dare selling anything remotely new.
Are you personally willing to accept $1 mil liability for any major flaw in the software you wrote in the past?
False dichotomy. There's certainly a middle ground between allowing corps to get away with whatever they want with a mere slap on the wrist, and fining them into bankruptcy.
> Are you talking about the Samsung Fold? Because they didn't sell even one of those.
Eh, true - but only because the test units sent out to media outlets and tech journalists started breaking[0].
> You should be careful what you wish for. If companies were fined billions of dollars for faulty products, in a couple of years you'll have nothing to complain about. And not because everything will be amazing.
> Just imagine if Microsoft/Apple/"Linux" was fined 1 billion for every major flaw in their OS. Are you personally willing to accept $1 mil liability for any major flaw in the software you wrote in the past?
I don't think that's what the parent commenter was asking for, they specifically called out the marketing claims. Which, I think, is completely fair. re: folding and waterproof devices, Samsung intentionally marketed those features of the devices and well, they don't really work. True, they never actually sold the folding devices, but it was because of the feedback from testers, not something their internal quality control caught.
I definitely don't think every software/hardware company should be fined $1M-1B for major flaws, but if you're deliberately marketing a feature which doesn't exist (salt-waterproofing) or doesn't work (folding), whether intentionally or due to QA/QC negligence on your part... I definitely think you should be fined for misleading consumers.
I won't comment on their batteries - Samsung never marketed their phones based on how safe their batteries are.
But Samsung has been repeatedly caught using stock photos from a DSLR and representing those as images captured by the cameras on their phones[1]. I'd certainly say they should be fined for that - that's intentionally misleading consumers re: the capability of the product.
You're right, I definitely embellished things more than necessary there. I was just done reading that other article on HN frontpage today regarding what looks like gross negligence @ Boeing and I was all frothy at the mouth ;)
I don't think the end result would be clever and resourceful people being "afraid" of regulatory backlash if they screw up. Not if we demand a reform that is careful to avoid such an outcome, that is.
The way I see it, corporate systems around the world are being governed in such a way that often leads to directly rewarding bad behavior.
Humans can innovate at incredibly complicated levels. We have plenty of examples of that. I think we just need to find ways to ensure that the resulting organisations don't grow to such a size that they have an entire level of executive leadership that seeks to relentlessly drive profits at the cost of all else.
Maybe that means we as a species actually move a little bit slower sometimes? Would that be so bad? I'm probably suffering some extreme cognitive bias, but I really think that we'd be better off if that were the case.