• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: July 9th, 2023


  • This article was posted here as well. Here’s the comment I left there:

    This article seems either very naïve or fairly disingenuous. Signal is not precariously installed on one box such that, if that box goes down, the service dies. It is distributed. It’s running on many machines within AWS, and technologically there’s no reason it couldn’t be in multiple regions of AWS, or even spread across multiple clouds (e.g. Azure, Google Cloud, Oracle, etc.), to improve resiliency to outages. The only way in which it is “centralized” is that there’s one foundation in charge of the whole thing. Are there drawbacks to this? Yes. But self-hosted, distributed, mesh/relay chats also have drawbacks: servers in the mesh go down, people don’t keep things updated, and they don’t necessarily connect to every other instance, creating disjointed pockets.

    Also, saying “we don’t need the internet, we need mesh networks” is odd… The internet is a mesh; hence “inter.” Anything else is just a smaller version of the same thing, again with some benefits and some drawbacks.

    Fighting a (relatively) successful platform that champions privacy and security seems like a bad thing to do when those are the same primary goals of the platform you support. It would be better to discuss the merits and use cases of each, and beat the privacy and security drum together.


    In my opinion, this article is just spreading FUD. Signal is not perfect, but it’s pretty good. And when there’s an outage, we know why, and we know there’s a team working on it. With a federated service, it may be harder to take “the whole thing” down, but that doesn’t mean nodes don’t go down, service isn’t disrupted, etc. Scaring people away from a (usually) reliable, open platform that has been audited, that actively advances security research, and that keeps its platform secure against emerging threats is counterproductive. It’s just going to keep people using SMS and WhatsApp.





  • I used Windows growing up, switched to Linux in high school on my personal machines, and was forced to use a Mac for nearly 10 years at work. In my experience, they all have problems, and the worst part is always early on. After you’ve used them for a while and have gotten familiar/comfortable, the problems get easier to deal with, and switching back (or on to something new) becomes more daunting/uncomfortable than dealing with what you have. So in that sense, yes, it will get easier.

    Also, as hardware ages, you often see better support (though laptops can be tricky, as they are not standardized).

    Keep in mind, when you use Windows or Mac, you’re using a machine built for that OS and (presumably) supported by the manufacturer for that OS (especially with custom drivers). If you give Linux the same advantage (buy a machine with Linux pre-installed, or with Linux “officially supported”), you’re much more likely to have a similar, stable experience.

    Also, I’ve had better stability with stock Ubuntu than with its derivatives (Pop!_OS and Mint). It might be worth trying an upstream distro to see if that helps.



  • Having daily driven Windows (~6 years growing up), MacOS (8+ years for work), Linux (~18 years on personal and (some) work machines), and ChromeOS (~2 years, on a cheap Chromebook used while I was traveling places I didn’t want to take an expensive machine), if my options were Windows, MacOS, or ChromeOS, I would 100% take ChromeOS. Even on cheap hardware, it was a better user experience than the others… Though I will caveat that with: when I had to do work that required heavy lifting, I remoted into my Linux desktop. But that was a hardware limitation, rather than a software limitation.

    For people who know what they’re doing, I recommend traditional Linux. For those who don’t, I recommend ChromeOS. Mac and Windows are both also run by mega corps, they’re all spying on users… at least ChromeOS is performant and stable.







  • Raster images do not need to be rendered - see Rendering:

    Rendering is the process of generating a photorealistic or non-photorealistic image from input data such as 3D models… Today, to “render” commonly means to generate an image or video from a precise description (often created by an artist) using a computer program.

    Note that “render” is a fairly generic term, and it is sometimes used like “render to the screen,” to just mean to display something. Rasterisation may be a better term to use here, since it only applies to vector graphics, and is the part of the process I am referring to.

    In any case, except for possibly reading fewer bytes from disk, the vector case includes all the same compute and memory cost as the raster image - it just has added overhead to compute the bitmap. On modern hardware, this doesn’t take terribly long, but it does mean we’re using more compute just to launch/load things.
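
    To make that cost concrete, here is a minimal sketch of the two load paths (assuming Python with Pillow and cairosvg installed; “icon.png” and “icon.svg” are placeholder assets, not anything from the discussion above). The raster file is simply decoded, while the vector file has to be rasterised first and then decoded:

        # Minimal sketch: compare loading a pre-rasterised image with
        # rasterising an SVG at runtime. Assumes Pillow and cairosvg are
        # installed; "icon.png" and "icon.svg" are placeholder assets.
        import io
        import time

        import cairosvg
        from PIL import Image


        def load_raster(path: str) -> Image.Image:
            # Raster case: decode the bitmap that already exists on disk.
            with Image.open(path) as img:
                return img.convert("RGBA")


        def load_vector(path: str, width: int, height: int) -> Image.Image:
            # Vector case: rasterise first (extra CPU and memory), then decode
            # the resulting bitmap exactly like the raster case.
            png_bytes = cairosvg.svg2png(url=path, output_width=width, output_height=height)
            return Image.open(io.BytesIO(png_bytes)).convert("RGBA")


        if __name__ == "__main__":
            start = time.perf_counter()
            load_raster("icon.png")
            print(f"raster decode: {time.perf_counter() - start:.4f}s")

            start = time.perf_counter()
            load_vector("icon.svg", width=256, height=256)
            print(f"svg rasterise + decode: {time.perf_counter() - start:.4f}s")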


  • It’s also worth noting that apps have to ship higher-resolution assets now, due to higher-resolution displays. This can include video, audio, images, etc. Videos and images may be included at multiple resolutions, to account for different-sized displays.

    For images, many might assume vectors are the answer, but vectors have to be rendered at runtime, which increases startup time in the best-case scenario, and they aren’t even supported on all platforms, meaning they have to be shipped alongside raster assets in a few different sizes, further increasing package bloat. And of course the code grows to add the logic to properly handle all the different asset types and sizes (a sketch of that selection logic follows below).

    All this (packaging dependencies, plus assets/asset handling) to say it isn’t always malware, ads, electron, etc. Sometimes it’s just trying to make something that looks nice and runs well (enough) on any machine.
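
    As a rough illustration of the asset-handling logic that accumulates, here is a small sketch (the file names, scale factors, and pick_icon helper are all hypothetical): it picks the smallest shipped raster that covers the display scale, and only uses the vector source when the platform can rasterise it at runtime.

        # Sketch of asset-selection logic an app might grow: choose the smallest
        # pre-rendered raster that covers the display scale, or fall back to the
        # vector source only when the platform can rasterise it at runtime.
        # File names and scale factors here are hypothetical.
        from pathlib import Path

        RASTER_VARIANTS = {1.0: "icon@1x.png", 2.0: "icon@2x.png", 3.0: "icon@3x.png"}
        VECTOR_SOURCE = "icon.svg"


        def pick_icon(display_scale: float, svg_supported: bool) -> str:
            if svg_supported and Path(VECTOR_SOURCE).exists():
                return VECTOR_SOURCE  # rasterised at runtime, paid for in startup time
            # Otherwise use the smallest shipped raster that still covers the
            # requested scale, falling back to the largest variant available.
            for scale in sorted(RASTER_VARIANTS):
                if scale >= display_scale:
                    return RASTER_VARIANTS[scale]
            return RASTER_VARIANTS[max(RASTER_VARIANTS)]


        print(pick_icon(display_scale=2.0, svg_supported=False))  # icon@2x.png
        print(pick_icon(display_scale=1.5, svg_supported=False))  # icon@2x.png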



  • Worth noting is that “good” database design evolved over time (https://en.wikipedia.org/wiki/Database_normalization). If anything was set up pre-1970s, they wouldn’t even have had the concept of the normal forms used to cut down on data duplication. And even after they were defined, it would have been quite a while before the concepts trickled down from academia to the engineers actually setting up the databases in production.

    On top of that, name to SSN is a many-to-many relationship - a single person can legally change their name, and may have to apply for a new SSN (e.g. in the case of identity theft). So even in a well normalized database, when you query the data in a “useful” form (e.g. results include name and SSN), it’s probably going to appear as if there are multiple people using the same SSN, as well as multiple SSNs assigned to the same person.
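
    A tiny sketch of how that plays out, using Python’s built-in sqlite3 (the names and SSNs below are made up, and the SSNs are deliberately fake): the schema is normalised, yet the joined “useful” view makes one person look like several name/SSN combinations.

        # Sketch: a normalised schema where names and SSNs live in their own
        # tables, since both can change over a person's lifetime.
        # All data below is made up; the SSNs are deliberately fake.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE person      (id INTEGER PRIMARY KEY);
            CREATE TABLE person_name (person_id INTEGER, name TEXT, valid_from TEXT);
            CREATE TABLE person_ssn  (person_id INTEGER, ssn  TEXT, valid_from TEXT);
        """)

        # One person who legally changed their name, and was later issued a new
        # SSN after identity theft.
        con.execute("INSERT INTO person VALUES (1)")
        con.executemany("INSERT INTO person_name VALUES (?, ?, ?)",
                        [(1, "Jane Doe", "1980-01-01"), (1, "Jane Smith", "2005-06-15")])
        con.executemany("INSERT INTO person_ssn VALUES (?, ?, ?)",
                        [(1, "000-00-0001", "1980-01-01"), (1, "000-00-0002", "2010-03-02")])

        # The denormalised "useful" query: the same person now appears as four
        # name/SSN pairs, as if multiple people shared an SSN and one person
        # held multiple SSNs.
        for row in con.execute("""
            SELECT n.name, s.ssn
            FROM person p
            JOIN person_name n ON n.person_id = p.id
            JOIN person_ssn  s ON s.person_id = p.id
        """):
            print(row)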



  • I’ve personally lived in places where the closest convenience store was 2.25 km away and the grocery store was nearly 18 km, as well as places where a convenience store was literally part of my building and grocery stores were within walking distance.

    The U.S. is enormous and varied. Take a look at truesizeof and compare the U.S. and Europe (don’t forget to add Alaska and Hawaii - they won’t be included in the contiguous states). Consider how different London is from rural Romania.