"Half the comments here are talking about the vtuber herself. Who cares; it's been talked about before. Just imagine if half the thread were discussing what gender she is. What I am interested in is the claims here: https://asahilinux.org/2022/11/tales-of-the-m1-gpu/#rust-is-... (what is it called if it comes with a proof?). The resident C/C++ experts here would have you believe that the same is possible in C/C++. Is that true?"
"watching a virtual persona stream their development of their M1 GPU drivers is one of the most cyberpunk things I've ever seen! it's easy to forget that this world is looking more and more like those dreamed up by Gibson, Stephenson, etc. what a time to be alive."
"The fact so much hardware these days is running a full real-time OS all the time annoys me. I know it is normal and understandable but everything is such a black box and it has already caused headaches (looking at you, Intel)."
"> Nobody knew or even cared what the difference was between good and bad data science work. Meaning you could absolutely suck at your job or be incredible at it and you'd get nearly the same regards in either case.

In my experience it's even a little bit worse than that. Approaches that are wrong from a statistics point of view are more likely to generate impressive-seeming results. But the flaws are often subtle.

A common one I've seen quite a few times is people using a flawed validation strategy (e.g. one which rewards the model for using data "leaked" from the future), or relying on in-sample results too much in other ways.

Because these issues are subtle, management will often not pick up on them or not be aware that this kind of thing can go wrong. With a short-term focus they also won't really care, because they can still put these results in marketing materials and impress most outsiders as well."
"In a recent past life, I was an HPC (high-performance computing) administrator for a mid-size company (just barely S&P 400) in the transportation industry, so I had a lot of interactions with the "data science" team, and it was just a fascinating delusion to watch.

Our CTO did the "Quick, this is the future! I'll be fired if I don't hop on this trend" panic thing, picked up a handful of recent grads, and gave them an obscene budget by our company's standards.

The main problem they were expected to solve - forecasting future sales - was functionally equivalent to "predict the next 20 years of ~25% of the world economy". Somehow these 4 guys with a handful of GPUs were expected to out-predict the entirety of the financial sector.

The amazing part was they knew it was crap. All of their stakeholders knew it was crap. Everyone else who heard about it knew it was crap. But our CTO kept paying them a fortune and giving them more hardware every year with almost no expectation of results or performance. It was a common joke (behind the scenes) that if they actually got it right, we'd shut down our original business and become the world's largest bank overnight.

At least it finally gave the physics modelers access to some decent GPUs, which led to some breakthrough products, as they were finally able to sneak onto some modern hardware."
"Unfortunately it seemed pretty clear from the start that this is what data science would turn into. Data science effectively rebranded statistics but removed the requirement of deep statistical knowledge, allowing people to get by with a cursory understanding of how to get some Python library to spit out a result. For research and analysis, data scientists must have a strong understanding of underlying statistical theory and at least a decent ability to write passable code. With regard to engineering ability, certainly people exist with both skill sets, but it's an awfully high bar. It is similar in my field (quant finance): the number of people who understand financial theory, valuation, etc. and have the ability to design and implement robust production systems is small, and you need to pay them. I don't see data science openings paying anywhere near what you would need to pay a "unicorn", so you can't really expect the folks who fill those roles to perform at that level."
"My number one requirement for a tool like this is that the JSON content never leaves the machine it's on.

I can only imagine the kind of personal information or proprietary internal data that has been unwittingly transmitted due to tools like this.

If my objective was to gain the secrets of various worldwide entities, one of the first things I would do is set up seemingly innocent pastebins, JSON checkers, and online file format converters, and permanently retain all submitted data."
"If anyone wants to try it out, but doesn't want to send them your JSON, here's an example with some real-world data: https://jsonhero.io/j/t0Vp6NafO2p2

For me, this is harder to use than reading the JSON in a text editor with colour highlighting such as VSCode. I'm getting less information on the page, and it's harder to scan, but that might be because I'm used to reading JSON."
"Tried it out on some REST response from a local test server.

And, well, as much as I applaud the effort, I also think that I'll stick to my text editor for browsing JSON data and to jq for extracting data from it.

My text editor because it's easy to perform free-text search and to fold sections, and that's all that I need to get an overview.

Jq because it's such a brilliantly sharp knife for carving out the exact data that you want. Say I had to iterate over a JSON array of company departments, each with a nested array of employees, and collect everyone's email. A navigational tool doesn't help a whole lot, but it's a jq one-liner. Jq scales to large data structures in a way that no navigational tool ever would.

Also, there is the security issue of pasting potentially sensitive data into a website."
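To make the departments/employees example concrete, here is the kind of one-liner meant above; the JSON shape and field names are invented purely for illustration:

```shell
# Collect every employee email from a nested departments array.
# `.departments[]` iterates the outer array, `.employees[]` the inner one,
# and -r prints raw strings instead of JSON-quoted ones.
echo '{"departments":[
  {"name":"Eng","employees":[{"email":"ada@example.com"},{"email":"bob@example.com"}]},
  {"name":"Ops","employees":[{"email":"cat@example.com"}]}
]}' | jq -r '.departments[].employees[].email'
# ada@example.com
# bob@example.com
# cat@example.com
```

The same pipeline works unchanged whether the input is three departments or three thousand, which is the scaling point made above.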
"This is absolutely awesome! I've had (terrible quality) 3-color e-ink displays and some associated electronics sitting in my office cupboard for a few years because I wanted to prototype this project, but I never did. I'm so glad you did this!

Since I just want this to come to fruition, I'll explain what my intended launch strategy was. I'm a developer (just a contractor, not owner) of Tabletop Simulator and do some stuff in that community where there's an overlap between physical tabletop and digital. My plan was to launch a card game (largely designed) and the e-ink cards simultaneously via Kickstarter - but before that, a digital implementation of the game on TTS. Basically, as much as I love this idea, I couldn't see it being monetisable on its own unless you can bring the cost down significantly. It also didn't seem like a defensible business on its own. But if the product is a game that uses said functionality, well, that'd be just swell.

Anyway, congrats, this is awesome!"
"This is very cool. I think you'll have a hard time finding a traditional board game publisher willing to put money into this (there might be one or two out there, but most will see this as prohibitively expensive for them), but you might be able to pull off a successful Kickstarter for them on your own.

Kind of like the Blinks game system: little hexes with colored lights that each have a separate game in them and can 'teach' the other hexes they connect to.

One of the Blinks Kickstarters: https://www.kickstarter.com/projects/move38/blinks-smart-boa..."
"Can I load up whatever images I want? The dream would be proxying Magic cards for playtesting. If you could load net-decks and shuffle via an app over NFC somehow, that'd be amazing as well!

Lots of possibilities here!"
"https://archive.ph/wB2s4"
"I married a vegan, and I eat a lot of vegetarian food. (I also still eat plenty of meat, just not every day.)

One extremely frustrating aspect of plant meat is that it aggressively pushed traditional veggie burgers off restaurant menus. A familiar refrain I've heard in restaurants in the last few years is "we used to have a nice veggie patty, but they replaced it with the beyond/incredible/whatever patty."

The thing is, vegetarian food is incredible without needing to taste like meat. When I've had these products, I've always walked away feeling like they taste inferior to traditional vegetarian burgers / sausages that don't try to taste like meat.

> Some say the slowdown in sales is a product of food inflation, as consumers trade pricier plant-based meat for less-expensive animal meat.

Normally vegetarian food costs less than meat. It's because the animals need to eat (surprise surprise) vegetables! When you eat the vegetables directly instead of having the animal eat the vegetables for you, it's cheaper.

IMO, the "meat in a vat" approach, where animal tissue is grown in some kind of factory setting, is much better. When I want to eat meat, I want to eat meat."
"This post is crowded with responses from people explaining why they hate Beyond Meat.

FWIW, I've always liked Beyond Meat as a "good enough" substitute for a beef burger: easy to prepare and close in texture to a real burger. If you use a burger sauce and cheese like most recipes for non-vegans do, it tastes pretty close. I even liked it so much I'd order it at restaurants wherever available, and I bought a few shares since I figured they'd do well.

I've tried Impossible Burgers too (both prepared by others and by myself), and I don't like that they have hemoglobin in them; for some reason they're more in the uncanny valley than Beyond Meat is for me. Beyond Meat is just different enough that it doesn't taste to me like moldy beef the way Impossible Burgers do.

What _did_ eventually get me to stop making Beyond Burgers is that the fat and sodium content is as bad as or worse than plain regular beef. Reducing red meat consumption to once a month or so achieves basically the same benefits, and occasional red meat consumption + no more frequent Beyond Meat burgers is healthier in the long run than what I was doing before. I think an under-appreciated advantage of the veggie and black bean burger options out there is that they tend to be way better w.r.t. fat and sodium, and taste just as good once you give up on the "almost beef" taste of these new-wave alternatives. Beyond Meat _did_ work for a few people out there, but never tweaked its product enough to seriously compete with cheaper alternatives as inflation has gotten worse."
"Oh, top nostalgia from the 1990s/2000s online. These buttons meant a lot to me. I think I said this in some of my old comments, but I learnt programming because I built a very popular website about a topic/fandom that I was very into back then.

There were a lot of websites about that, but only a few were really popular. The structure was very basic: header, left sidebar with links, centered content, and right sidebar with a small chatbox, polls, random quotes, and affiliates. The affiliates section was simply that: a link to another website in exchange for a link to yours. They started as text links, but they usually evolved into 88x31 buttons, first simple JPGs, then animated GIFs.

Being in the affiliates section of the top websites (again, only for this subject) was the best. I remember spending hours and hours (translated into days and weeks) designing my buttons in MS Paint (yeah, pixel by pixel) so I could convince the webmaster of the popular website to include mine, because a lot of the time they decided based on the button and not the website itself.

Anyway, sorry for the long rant (maybe younger people will learn something today), but like a lot of people I miss the old internet, and I could talk for hours about this!"
"Great site, but is it finished? If not, perhaps it would benefit from a yellow and black flashing "Under Construction" banner."
"Here are even more of them: http://cyber.dabamos.de/88x31/"
"I'm building a decentralized AI marketplace, and guess what? This piece of software is the backbone of our off-chain file transfer protocol. It just works; however, its seeder (bittorrent-tracker) depends on node-webrtc, which is not an official Node package for WebRTC, and it has some weird issues. For example, when seeding a torrent through bittorrent-tracker, on the client side (Chrome browser) I used chrome://webrtc-internals to debug the RTC connections. For the first time in my life, I coined a term: "connection leakage". A bug in the WebRTC handshake leads to too many connections (300+) for even 1 KB torrents. I hope it gets solved very soon.

I also attach my Node profiling output. Looking for some advice if you have any:

 [C++]:
   ticks  total  nonlib   name
    5511  16.9%   43.2%   epoll_pwait
     692   2.1%    5.4%   __pthread_cond_timedwait
     682   2.1%    5.3%   __lll_lock_wait
     413   1.3%    3.2%   __GI___pthread_mutex_lock
     338   1.0%    2.6%   __GI___pthread_mutex_unlock
     128   0.4%    1.0%   __write
      84   0.3%    0.7%   __pthread_cond_broadcast
      83   0.3%    0.6%   __lll_unlock_wake
      62   0.2%    0.5%   __mprotect

 [Summary]:
   ticks  total  nonlib   name
     153   0.5%    1.2%   JavaScript
    8206  25.2%   64.3%   C++
     116   0.4%    0.9%   GC
   19811  60.8%           Shared libraries
    4411  13.5%           Unaccounted

 [C++ entry points]:
   ticks    cpp  total   name
     692  27.3%   2.1%   __pthread_cond_timedwait
     679  26.8%   2.1%   __lll_lock_wait
     413  16.3%   1.3%   __GI___pthread_mutex_lock
     337  13.3%   1.0%   __GI___pthread_mutex_unlock
     100   3.9%   0.3%   __write
      84   3.3%   0.3%   __pthread_cond_broadcast
      82   3.2%   0.3%   __lll_unlock_wake
      52   2.1%   0.2%   __mprotect"
"Besides being an amazing technical achievement, I find this very interesting legally, as it further blurs the line between passively viewing content hosted somewhere and redistributing/actively sharing content.Have there already been cases of websites making their visitors unwitting peers, similar to e.g. JavaScript cryptocurrency mining?"
"P2P connections over the web are usually not possible due to typical consumer router configurations and bad decisions in the design of the WebRTC protocol.

The vast majority of these P2P web projects, including WebTorrent, are actually using proxy servers to create the illusion of P2P connectivity (to be specific, they are using TURN servers to proxy traffic).

Here's a Stack Overflow question I asked about this and burned a 100-point bounty on with no answers: https://stackoverflow.com/questions/70624649/webrtc-fails-to..."
"I am loving all this Emacs love lately (I am an emacsophile), but I do find all the attention it is suddenly getting a bit surprising. Is it just that "long lines, LSP, fast syntax highlighting" is making new people interested, or is it just us neckbeards coming out of the woodwork? I mean, many of these things are just a package-install away right now. I seldom see vim put in the same limelight, for instance.

Or maybe I am just more attentive to the coverage now?"
"Wow, Eglot/Treesitter/better package support in 29 make me want to try Emacs again.

> Install packages from source with package.el

Emacs users updating to 29: do you plan to use this instead of Straight now? If not, can you help me understand what more Straight provides?

Emacs on macOS users: do you generally compile new versions of Emacs from source, or wait for ports like Mitsuharu Yamamoto's one[1] to update?

[1] https://bitbucket.org/mituharu/emacs-mac/src/master/, used by https://github.com/railwaycat/homebrew-emacsmacport"
"It's great to see both Eglot and tree-sitter being merged. However, I am unhappy about the state of 'Emacs configurations/distributions' right now. I have been using Doom Emacs, but development has pretty much stalled there [0], and I don't think there is any distribution that is keeping up with these cutting-edge features (compared to the Neovim ecosystem, let's say). Somehow it feels like I was seeing a lot more activity around Emacs configurations two or three years ago.

> Compile EmacsLisp files ahead of time

Ooh, this is interesting. Hoping to see a derivation in https://github.com/nix-community/emacs-overlay soon.

[0] I am not complaining, though, as Doom was the main author's personal config from the get-go. I am just pointing out a void."
"Friends, if you don't know: you can add readline support to LOTS of things, especially custom scripts and tools with a prompt, just by calling the program with rlwrap.

> rlwrap is a 'readline wrapper', a small utility that uses the GNU Readline library to allow the editing of keyboard input for any command.

https://github.com/hanslub42/rlwrap"
"I'm using bash (and thus GNU readline) all day every day at work, and by far the most useful shortcut for me is Ctrl-r (reverse history search).

I also sometimes use Ctrl-a (start of line, if the keyboard doesn't have a convenient Home key) and Ctrl-e (end of line, in lieu of an End key).

Btw the post disses Ctrl-s / Ctrl-q for pausing/resuming terminal output, but it can be useful when you're outputting a lot of text (for example tailing a logfile)."
""Shame, then, that even serious command line hackers never bother learning about its capabilities, as they can supercharge your command line productivity."

Because it is really confusing if you are not an emacs user."
"No one has mentioned AdGuardHome yet?

AdGuardHome is far better than Pi-hole. It's a single Go binary, and I think the UI is better. It won't break if you upgrade your system. You don't need Docker or a LAMP stack. Just pull the binary and run it. It will even generate a systemd service file for you if you need one.

Edit: https://github.com/AdguardTeam/AdGuardHome"
"I used a Pi-hole for all devices in my house, including my work MacBook. They manage their MacBooks with Jamf, so most things are pretty locked down (including DNS settings in System Preferences). Sudo access is only possible if you open up the Self Service app, log in, and issue yourself sudo/admin access for 6 hours. Once it expires, you have to issue yourself admin/sudo access again. No sudo = no changing DNS.

I set it and forgot it, until I went to Estes Park, Colorado over the Christmas holidays one year. I travelled with my MacBook just in case anything popped off... and it did. I logged into my MacBook, but quickly realised that although I could connect to WiFi as normal, no DNS would resolve (it was pointed at 192.168.1.100 on my home network), and I couldn't connect to anything - including logging into the Self Service app to re-issue sudo access to change the DNS. I had to walk a new colleague through how to handle the scandal over the phone, driving through the mountains... thank goodness for good cell service!"
"I think the best way to set Pi-hole up is to use the Docker image: https://github.com/pi-hole/docker-pi-hole/. Run it on a Pi or any other computer with Docker. Upgrades are painless."
"This seems to have been announced a couple of weeks ago, so we don't need to just trust the tweet: https://www.macworld.com/article/1377200/apple-to-limit-aird...

Having it open to everyone for only a limited amount of time actually kind of seems like the right thing, but it's hard to come up with a legit explanation for why they rushed that change to China first."
"For context: AirDrop was one of the only ways protesters had to communicate en masse. Signal is unsafe for Chinese protesters, since it requires SMS verification upon signup and is therefore linked to your identity.[0] Mesh networks are the only real solution there, and AirDrop is about the only mainstream one. AirDrop has been used by Asian protesters for years.[1] I highly recommend you read the full China Digital Times article, which gives excellent context and lets these protesters explain the value of AirDrop in their own words.[1]

Apple's timing is unmistakably suspicious; keep in mind that there have been protests for weeks, which preceded the iOS update. It must also be pointed out that Apple issued an official statement to Western media outlets that the goal was to prevent spam.[2] At no point did Apple ever admit that this was done to follow any government demand. It's unknown whether Apple is under a Chinese gag order, but we shouldn't speculate that it is unless experts say it's likely. If Apple is complying with Chinese government orders, then it has an ethical duty to make that public; as of yet, we have no reason to assume there are gag orders related to this. Apple deserves criticism for the update, and for trying to hide its true purpose.

[0]: https://twitter.com/RealSexyCyborg/status/159707255662827929...
[1]: https://chinadigitaltimes.net/2022/11/netizen-voices-apple-r...
[2]: https://www.bloomberg.com/news/articles/2022-11-10/apple-lim..."
"I have to admit that when you actually look at the details, this is a rather perplexing change. Ditto for reading the comments here.

Is Apple being pro-CCP with this change? If you open AirDrop to everyone, your device can be tracked and identified individually. When this setting is disabled, that becomes far harder.

I'm of the opinion that this change is good and actually increases security across the board, including for the people who are using it to exchange information during protests, since it disables a vector for tracking devices."
"FFmpeg is powerful technology, yet approaching it can be very daunting. So we decided to create an approachable guide for intermediate devs. Feedback is very much appreciated."
"Not to criticize the guide - it's brilliant! If asked for improvements, I'd suggest adding a table of contents.

Slightly off topic, but the guide does suggest a reading time. It's interesting that people keep using read estimates for technical/scientific/professional documentation. This one says it's a 58-minute read. Not 1 hour, not 59 minutes, but spot on 58 minutes. Now, I'm not a novice at using ffmpeg and I think I'm at least an average reader, and I can tell you that it would take me a _lot_ longer to read this guide in a meaningful manner.

But a brilliant guide. I'm definitely going to use it to expand my ffmpeg knowledge."
"Wow are you the guy from: "Interview with FFMPEG enthusiast in 2022": https://www.youtube.com/watch?v=9kaIXkImCAM ? :)"
"I fortunately have close friends, but not through any of my own doing. Over the years, there were a lot of friendly extroverts who encouraged me to hang out and join them for burger night, or who'd message me when they were in town. Everyone I dated took the initiative to ask me out, including my wife. I always feel awkward when I reach out to people, as though I'm bothering them somehow, so I tend to avoid doing so. Once someone is a close friend, I go out of my way to maintain the connection, but it's that in-between stage of "acquaintance" where I have a hard time.

I had a bit of a revelation when I left my last job. There were very few comments from coworkers when I left. I don't think I was disliked (hopefully?), but I don't think anyone really considered me their friend either. Looking back, I think I came across as somewhat of an NPC to coworkers. I preferred to eat lunch by myself, and I only discussed the business topic at hand during meetings unless someone else brought up a personal discussion.

I wouldn't mind starting more personal discussions, but I'm always concerned it might come across the wrong way, so the furthest I seem to get is "how was your weekend?""
"Very brief summary: a personal anecdote about forming good friendships, followed by advice on how to have better conversations (vulnerability, curiosity), how to meet more people who will become friends (meet people but filter out most of them, take initiative with following up), and lastly how to deepen friendships.

As someone who has a fair number of close friends, I think this is a good post with lots of solid advice. And it's written well! Personally, I've been lucky to just naturally meet and form friendships with my best friends over time, but I definitely think taking initiative and following up are important to maintaining a close friendship.

A general point in this post I also like is that making and keeping close friends may require work and energy, rather than being something that life just throws your way. I intentionally try to periodically message friends, come out to places they live, and just generally keep in touch; as one gets older, people move about and it's harder to maintain your closest friendships, but it is possible!

Last comment: this is a very pragmatic and analytical post with a lot of discussion of things you can do, and not much discussion of how you should feel. I'd add this - just CARE. Appreciate the people in your life and let that appreciation guide you."
"Great article, though it does give a sort of programmy, algorithmic answer to one of life's great questions. Not saying that's bad, just unexpected.

I was actually thinking about this today. Here's an awkward truth about friendships that everyone needs to get comfortable with: often one of you is more interested in the relationship than the other. You know what I mean. For instance, I was invited to a wedding a few weeks ago. It hadn't occurred to me to write to this buddy in many years, and I didn't invite him to my wedding, but he invited me to his. I had a great time; we caught up and had a good talk. The same has happened the other way round, I'm sure: people I like a lot and contact, but who don't contact me. Yet when we hang out, everyone has a good time.

If you act weird about these relationships, you lose them. You don't want to do that, because marginal relationships are maybe the most rewarding to maintain, IME. People doing different things from you in different places bring a lot more into your mix than the ones you see every day. There also tend to be many of these relationship seeds, so your close friends will grow from some of them."
"I like it, but the array details are a little bit off. An actual array does have a known size; that's why, when given a real array, `sizeof` can give the size of the array itself rather than the size of a pointer. There's no particular reason why C doesn't allow you to assign one array to another of the same length; it's largely just an arbitrary restriction. As you noted, it already has to be able to do this when assigning `struct`s.

Additionally, a declared array such as `int arr[5]` does actually have the type `int [5]`, that is, the array type. In most situations that decays to a pointer to the first element, but not always, such as with `sizeof`. This becomes a bit more relevant if you take the address of an array, as you get a pointer to an array, e.g. `int (*ptr)[5] = &arr;`. As you can see, the size is still there in the type, and if you do `sizeof *ptr` you'll get the size of the array."
"> Everything I wish I knew when learning C

By far my biggest regret is that the learning materials I was exposed to (web pages, textbooks, lectures, professors, etc.) did not mention or emphasize how insidious undefined behavior is.

Two of the worst C and C++ debugging experiences I had followed this template: some coworker asked me why their function was crashing; I edited their function and it sometimes crashed or didn't depending on how I rearranged lines of code; and later I figured out that some statement near the top of the function had corrupted the stack, and that the crashes had nothing to do with my edits.

Undefined behavior is deceptive because the point at which the program state is corrupted can be arbitrarily far away from the point at which you visibly notice a crash or wrong data. UB can also be non-deterministic depending on OS/compiler/code/moon phase. Moreover, "behaving correctly" is one legal behavior of UB, which can fool you into believing your program is correct when it has a hidden bug.

A related post on the HN front page: https://predr.ag/blog/falsehoods-programmers-believe-about-u... , https://news.ycombinator.com/item?id=33771922

My own write-up: https://www.nayuki.io/page/undefined-behavior-in-c-and-cplus...

The take-home lesson about UB is to rely only on following the language rules strictly (e.g. don't dereference a null pointer, don't overflow a signed integer, don't go past the end of an array). Don't just assume that your program is correct because there were no compiler warnings and the runtime behavior passed your tests."
"This looks decent, but I'm (highly) opposed to recommending `strncpy()` as a fix for `strcpy()` lacking bounds checking. That's not what it's for; it's weird and should be considered as obsolete as `gets()` in my opinion.

If available, it's much better to go the `snprintf()` way, as I mentioned in a comment last week: replace `strcpy(dst, src)` with `snprintf(dst, sizeof dst, "%s", src)`, and always remember that "%s" part. Never put src there, of course.

There's also `strlcpy()` on some systems, but it's not standard."
"Somehow these guys peaked at about 900 employees, according to LinkedIn (https://www.linkedin.com/company/blockfi).

They've raised about a billion dollars of VC: https://www.crunchbase.com/organization/blockfi-inc/investor... (note, CB lists $1.4B, of which $400M is debt from FTX, which I imagine they never got).

Unbelievable, the amount of destruction of value here... it's just total carnage."
"I feel sorry for the retail investors caught up in yet another crypto scam. Let me try to articulate my view of what's actually going on with these seemingly endless scams, in hopes of saving future retail investors some pain.

Prominent venture capital firms like A16Z, Sequoia, etc. have discovered a new get-rich-quick scheme: they raise a fund and invest it in some shitcoin like FTX, SOL, etc. The shitcoin founders use the cash to market and shill the coin, hiring public figures like famous NFL players. Retail investors FOMO into these tokens, boosting prices and attracting more retail investors. Once the market cap of the shitcoin exceeds the VC/investor cost basis, they cash out and let the rest ride. Eventually the shitcoin implodes, but by this time the firms are already onto their next fund and another token. A16Z is on their 4th fund now and it's $4.5B.[0] There's an entire political strategy at play as well, where shitcoins are employing folks in DC and lining pockets to further the scam.

All of this is possible because there is no regulation of crypto tokens. In reality these are unregulated securities, and these large VC firms are exploiting a loophole for profit. This goes for literally every token, starting at ETH and below.

[0]: https://a16zcrypto.com/"
"With projects like blockfi, users get all the disadvantages of the centralized traditional financial system without any of the regulatory protections that have built up over centuries. Terrible for the people with assets in blockfi but extremely predictable. The point of defi is that you can use financial services without a centralized counterparty. People who can’t handle their own keys should be using traditional banks, at least until the technology is improved. Compare to Aave and other actually decentralized lending protocols which are not at risk of bankruptcy."
"People here are talking about how they don't want YouTube to make a profile of their preferences, and here I am, wishing YouTube had a better profile of mine.

Lately, there are almost no new videos in my recommendation feed. It's mostly either things I've already watched or new videos from channels I'm already subscribed to. It really feels like I've exhausted the internet at some point. This can't possibly be true now, can it? :')

Where do I opt in for more tracking?"
"I love FreeTube, and I try to contribute to people directly if I'm going to use FreeTube.

It'll truly become killer when I can save multiple playlists, like I can in NewPipe. Sadly, right now you're stuck with one playlist of "Favourites", plus copy-pasting a playlist link from YouTube to queue things up."
"More YouTube alternatives and privacy frontends:- https://www.privacytools.io/youtube-alternatives/- https://www.privacytools.io/privacy-frontends/"
"> In the Chinese app store market [there are] five to ten commonly used app stores, and yet even the largest has less than a majority market share. Most Chinese people have more than one app store on their phone, so there is no monolith there, whereas “outside of China, Apple and Google control more than 95 percent of the app store market share”. Ecosystems with multiple app sources work, and governments around the world believe that monopoly forces are what keeps Google Play and Apple App Store dominant."
"Google is pretending that there's user choice in app stores on Android, but it's very clear that Google Play gets preferred treatment in various ways. I just switched Android phones last week, and the system offered to copy over everything by plugging a USB-C cable into both phones. To my surprise, Google/Android copied over all apps installed through Google Play, and just ignored all F-Droid-installed apps! Not even a notice or message that not everything could be copied..."
"This article seems at odds with itself, if I'm reading it correctly. The argument put forth is partly that an app store / hub not accepting all possible apps is a form of censorship, but it simultaneously asserts that there are too many apps and that an app store which doesn't accept all apps is providing a "curation" service beneficial to the user. Maybe they are trying to make a delineation between a large-scale app store and a small-scale app store, but the point doesn't feel well defined or fleshed out. Unsure what was meant by the statement "The freedom to get apps will always be in tension with the things that people want to keep out of their life"... applications being available to install doesn't mean you have to install them. I agree that the iOS App Store and Google Play store enjoy a monopoly over much of the world, but I don't see that China has a less censored market. I understand China to have far greater government censorship over the internet and their applications than most other regions, so having a more even market share over multiple app stores doesn't necessarily improve the censorship issue."
"This is why I love my 80 series Land Cruiser (carb, not EFI): it's so simple that a lot of maintenance can be done with just wrenches. And the charm of having almost no electronics[1] in the car controls is that if something is wrong, you will know it by the sounds it makes, and you will have a lot of time to fix it before it's completely toast. I have heard horror stories about newer cars with electric ABS accumulators that stopped working on the highway; not with my FZJ80[2]. [1] The so-called 'emission computer' unit on the car is a simple pulse counter/comparator that activates a VSV on the carb to reduce backfire while descending downhill with the foot off the throttle. [2] My brake booster is leaking a little bit, but at least it won't suddenly give up on me while I'm driving; still looking for a replacement."
"I wish we could get these in the US; basic cars just aren't sold here anymore. This is why I still drive a 1988 Suburban. Yes, it requires a lot of maintenance (it's almost 35 years old!) but it's simple, tough as nails, and, in general, easy to repair."
"I have a Toyota Land Cruiser here in South Africa with the petrol/gasoline engine. Amazing car. It's built to last though, so its on-road handling is not as great as modern cars."
"This is part of the bigger Macpine project, which to me is much more interesting than LXD: https://github.com/beringresearch/macpine 'The goal of this project is to enable MacOS users to: Easily spin up and manage lightweight Alpine Linux environments. Use tiny VMs to take advantage of containerisation technologies, including LXD and Docker. Build and test software on x86_64 and aarch64 systems.'"
"So is there some canonical guide to running a docker compose style app on Mac M1 machines that has good filesystem performance? It seems like there are many ways to approach the topic now, so it's hard to tell which one is "winning". I'd love to containerize all of my local development efforts (scripts and Rails apps) but the slow-ass filesystem always ruined it in the past."
"This is cool and a worthwhile thing, but how is this different than the many (b/x)hyve clones and others based on QEMU that use MacOS’s virtualization framework to run a minimal Linux for containers? What’s the differentiator that makes this better (hopefully?) from what’s come before?"
"In case anyone from the US reads this, BSPP and BSPT fittings are rare and incredibly frustrating here, as our NPT (National Pipe Taper) threads are different and the selection of BSP(P/T) fittings is extremely poor in comparison. Also, I work with NPT fittings quite a lot: > For what it's worth, I tightly wrap the tape 10 times round the male thread and get an enraged mountain gorilla to tighten it up. This is a WTF NO!!! for NPT and I'll assume a WTF NO!!! for BSPT as well. You need about 1.5 wraps of PTFE tape to seal a fitting. Any more is wasteful and asking for leaks (or damage, if you're using plastic fittings). It helps if you use the correct tape width for the fittings (1/4”, 1/2”, and 1” for me) and develop a wrapping method that keeps the tape under tension at all times and in such a direction that threading it into the fitting doesn't unwrap the tape. Also, in my experience, when someone inexperienced first learns what pipe tape is, they try to apply it to everything. 20 wraps around a tapered pipe? Wrap a Swagelok fitting? Try to make a butt joint or an adapter for two pieces of plastic tubing? I've seen it all."
"This article doesn't touch much on why plumbing is hard. I'm from Poland, so I'm not only in IT but am also a plumber ;) Plumbing is hard because it is not forgiving. It's as binary as IT, except you learn the outcome with some delay, once you discover the damage caused by a leak. Either you do the pressure tests right or the repair can be expensive. And bugfixing is always tricky. Water also goes down whether you like it or not. Think about all the possible leaks inside a shower cabin. Or, even more impressive, under pressure water goes everywhere it possibly can. Plumbing is similar to electrical engineering, except it usually doesn't kill immediately (though working with gas is tricky anyway), but it requires a similarly strict mental model to do right. And when you see a plumber, it seems like this person is just a physical worker. So the misconception about the work's status must be leveled with money..."
"I've helped out with some plumbing work in an older house, and it's pretty fascinating to see the progression of technologies. 100 years ago, most drain pipes in the US were massive cast-iron pieces with no threads at all. They were mated together, then the joint was filled with a compound called oakum. To really hold it together, the plumber would pour molten lead on top of the oakum. Just taking that stuff apart is a lot of work. I can't imagine putting it together as well, especially for 40 hours a week. I agree with the author's dismay about threaded fittings, but 100% disagree about PTFE tape versus thread sealant. PTFE tape is garbage. If you use thread sealer the way it's supposed to be used (put on a decent amount, then thread the pieces together with the "nudge and a grunt" technique instead of cranking down on it with a huge amount of force), it will seal perfectly almost every time, and any minor leaks can usually be fixed by tightening the joint slightly. If that's not enough, just take it apart and redo it. I've rarely had to try twice, and never three times. Not sure about British threaded pipe, but NPT threaded pipe actually doesn't benefit from being tightened beyond a certain point because of the way the threads are designed. I redid the seals and some of the fittings[1] on all the antique hot water radiators in a house because no contractor within a day's travel would work on antique hydronic heating systems. Good quality thread sealant, no garbagey PTFE tape, no leaks, even in constant use. That having been said, modern pipes and fittings make things dead simple. PVC (or ABS, but PVC is nicer IMO) for drains, push-to-connect fittings for water lines (I like PEX, but I know opinions vary). No lead, no torches. Easy to cut with hand tools. Lightweight. Anyone who's interested can probably do at least basic work with modern pipes. [1] https://youtube.com/watch?v=MeHiE-j1KuQ"
"This problem first occurred in the mid-'90s, and people saw it was a problem, and that's why we specified the image dimensions in HTML (before the GIF or JPEG header was loaded), so the layout wouldn't shift during load, when people were already reading and navigating. Since almost the beginning, graphic designers were trying to lay out Web pages like an advertising brochure, in lots of bad ways, rather than as hypertext. When, in a sense, they finally did take control of the Web, HCI (usability in service of the user) had evolved into "UX" (advertising-like graphic design in service of non-users). There's often disincentive for things like "user can understand what the controls are and what the state is", "user can focus on the information they want", etc. UX won't care about these things until some time after users are so aware of the problems that it affects engagement metrics and eventually A/B testing figures out why. I'm imagining a eureka moment, during which a UX researcher, trying to find a new, improved dark pattern to screw over hapless users harder, accidentally lets slip in some old-school good usability, which wasn't even supposed to be in the test sets, but discovers that this strangely resonates with users."
"A particular bane of mine is the self-oscillating UIs, youtube is particularly bad at it these days - if the mouse pointer is in 'just the right place' (which is bigger than it sounds) then you get the seek-bar preview frames popup, which moves things just enough that the mouse pointer is no longer over the area that triggers it, so it vanishes, and the whole thing starts again."
"Way back when I first started learning to build web pages [when HTML4 was just a glint in Tim Berners-Lee's eye], it was considered bad form not to provide a 'width' and 'height' attribute for an image you were placing on the page. Because if you provided these, the browser would know how much space to leave for the image to occupy after it had downloaded [this was back in the dial-up modem days, so images usually loaded gradually after the rest of the page] and so the layout of the page would not change. Conversely, if you didn't provide 'width' and 'height' attributes, the browser would only leave a default small icon-sized space for the image, and then you'd have the page content move and shuffle about as each image downloaded and the rest of the page content got elbowed out of the way to accommodate it. It's funny how such basic concepts of usability seem to have fallen by the wayside in our new modern Web3 [or whatever version buzzword we're on now] in favour of moving content, modal overlays and the like. And, since so many sites these days just aggregate content from countless other external sources, even that basic notion of pre-defining your image sizes is often neglected too."
"Back when the Apache OpenOffice project was still alive, the LibreOffice project used git notes to track, for each commit to Apache OpenOffice, whether it was ignored (usually for being specific to the obsolete openoffice.org build system), redundant with a libreoffice commit (usually because the LibreOffice project had already fixed the same issue in a better way several years before), or cherry-picked into the libreoffice tree (this was the rarest case; in these last two cases, it also pointed to the relevant libreoffice commit); an aoo commit without git notes meant it had not been looked at yet. While the LibreOffice project stopped doing that a couple of years ago (probably because there was no longer anything interesting happening in the aoo repository), you can still see the branch they used to mirror the Apache OpenOffice tree, still with the git notes, at https://git.libreoffice.org/core/+/refs/heads/aoo/trunk (follow the parent links to see more, or clone the repository locally and look at that branch with gitk, which also shows the git notes)."
"I've tried to use git notes over the years, but unfortunately notes are tied to a specific commit hash. It's a blessing and a curse. Works great for some types of review system, or for "tagging" things related to deploys. Notes on commits on the master/main branch, which doesn't get rebased? Awesome thing, they work. But you can't as easily use them on branches: the moment a branch whose commits had notes is rebased, and SHAs change, good-bye notes associated with the "previous" SHAs :/"
"Speaking of git's plumbing, https://news.ycombinator.com/item?id=33566991 is the best guide I've found. It made everything simple. I didn't realize git is just a store of blobs, trees, and commits, and that's it. And when other people said that, I didn't understand the meaning. Blobs = file contents, keyed by SHA-1. Trees = lists of blobs (and other trees). Commits = pointers to a tree, plus parent commits. Refs (aka branches): each points to a specific commit. I bet notes are just blobs associated with a commit."
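The "keyed by SHA-1" part is easy to verify yourself: a blob's id is just the SHA-1 of a short header ("blob <size>\0") followed by the raw file contents. A minimal sketch that reproduces what `git hash-object` computes:

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    # git hashes the header "blob <size>\0" followed by the raw contents
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_sha1(b"hello\n"))  # same id `git hash-object` would print
```

Run `echo hello | git hash-object --stdin` in any repo to confirm the two agree.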
"At a place I used to work for, the four-day week question was put to the CEO at an all-hands company town hall. Not a fan of the idea, he scoffed and said something like, 'I pay you to be at your desk from 9am-5.30pm Monday-Friday. Why should I pay you the same for a day less?' I don't think he realised it at the time, but that answer was devastating to company productivity and morale. He'd just demonstrated to everyone that he didn't value results and all that was important was bums on seats. People stopped putting in extra effort, waited out their hours as that was all that was required, and started brushing up their CVs. I left not long after, and so did many others."
"In the US nuclear navy, the power plants have to be manned 24 hours a day, both operating and shutdown. Ships in maintenance periods also generally go through a phase where ship personnel have to support maintenance work around the clock. Additionally, some parts of the training pipeline are run 24 hours a day. I say all that to say that while I am not a subject matter expert, I have a significantly above-average amount of experience working various types of rotating shift work as well as duty rotations (a duty rotation is working 24 hours every 2, 3, 4, or 5 days depending on available manpower. Yes, you read that right - for a month, I was at work 28 to 32 out of every 48 hours). Forget four 8-hour days, I would work four 10-hour days right now in a heartbeat with no discussion or regrets. 1) It is invaluable to have a normal working day where you can do tasks - change your oil, get your haircut, go grocery shopping without a crowd, see a matinee; the list is endless. 2) The scope of weekend trip you can plan across 3 days instead of two is exponentially higher. So much room for activities. 3) Time after the working day just isn't that useful. Drive home, eat dinner, now it's 6.30/7pm. Waste a couple hours, go to sleep. After a 10-hour day, the time after work is precious and useful to relax, but then you get a whole other day off. It surely isn't for everyone, but it surely is for a lot of people. The thing that blows my mind is no one is even willing to try."
"The key to this in my view is having it be an officially "approved" option. Working at a FAANG company in the UK, I could already afford to take a 20% pay cut and work a 4-day week. I would love to do so in fact - but since that working pattern is relatively uncommon, I don't have the confidence that colleagues would respect or acknowledge it - basically I see myself spending an excessive amount of time telling people that I am not available on Friday for that meeting, and no I won't make an exception just for this week. Or, I would make an exception and get sucked into work - and now the company is getting 20% of my time for free. If a 4-day week was more widespread, I would have more confidence in maintaining it - and on the odd occasion I have to work the extra day, I wouldn't feel so bad given the 100% pay model described here."
"Once again, absolutely amazing. Those are more details of a really interesting internal CPU bug than I could have ever hoped for. Ken, do you think at some point in the future it might be feasible for a hobbyist (even if just a very advanced one like you) to do some sort of precise x-ray imaging that would obviate the need to destructively dismantle the chip? For a chip of that vintage, I mean. Obviously that's not an issue for the 8086 or 6502, since there are more than plenty around. But if there were ever, for example, an engineering sample appearing, it would be incredibly interesting to know what might have changed. But if it's the only one, dissecting it could go very wrong and you lose both the chip and the insight it could have given.[1] Also, in terms of footnotes, I always meant to ask: I think they make sense as footnotes, but unlike footnotes in a book or paper (or in this short comment), I cannot just let my eyes jump down and back up, which interrupts flow a little. I've seen at least one website having footnotes on the side, i.e. in the margin next to the text that they apply to. Maybe with a little JS or CSS to fully unveil them. Would that work? [1] Case in point: from that very errata, I don't know how rare 8086s with (C)1978 are, but it's conceivable they could be rare enough that dissolving them to compare the bugfix area isn't desirable."
"When I was at C-Cube Microsystems in the mid 90's, during the bring-up of a new chip they would test fixes with a FIB (Focused Ion Beam). Basically a direct edit of the silicon."
"The obvious workaround for this problem is to disable interrupts while you're changing the Stack Segment register, and then turn interrupts back on when you're done. This is the standard way to prevent interrupts from happening at a "bad time". The problem is that the 8086 (like most microprocessors) has a non-maskable interrupt (NMI), an interrupt for very important things that can't be disabled. Although it's unclear whether the very first revisions of the 8088 (not 8086) with this bug ended up in IBM PCs, since that would be a few years before its introduction, the original PC and its successors have the ability to disable NMI in the external logic via an I/O port."
"A fun fact (and I'm not violating any NDA here): my client https://paireyewear.com is growing very fast, and last year they raised $73 million from VC investors. They have their main store on Shopify, so they didn't need too much on the backend: only their CRM for keeping customers happy and handling returns and damaged goods and refunds and special sales. Also, on the backend, they had all the integrations with the 3PL, that is, 3rd Party Logistics, which is to say, they worked with external warehouses, because they weren't ready to have their own warehouses. They did have integrations with the warehouses so they could track inventory levels. So the fun fact is this: they were running entirely on free Heroku dynos. This year they successfully transferred to AWS, but they got to a fairly big scale while running entirely on free Heroku dynos. I'm still kind of amazed by that."
"Just moved my 8-year-old project google-webfonts-helper (https://github.com/majodev/google-webfonts-helper) off their free tier to my own private infra, and replaced the current dyno with a 301 handler: https://github.com/kenmickles/heroku-redirect AFAIK, sadly, Heroku does not provide some other _free_ permanent redirect option for their *.herokuapp.com sub-domains without actually running a dyno there."
"To be fair, I have had quite a few emails from Heroku telling me to move my stuff. Even after I thought it was already shut down, I kept getting emails. Everything I have on there is 5+ year old hobby things that I won't bother hosting elsewhere; it's nice to know they are paying more attention than I am to my projects' fates."
"Ken White (of "Popehat" Twitter fame) has a great article explaining how this and similar court cases during the First World War are the origin of the (poor) "fire in a crowded theater" argument against certain kinds of free speech: https://www.popehat.com/2012/09/19/three-generations-of-a-ha..."
"The Supreme Court case Schenck v. United States said at the time that speech discouraging people from being drafted was prohibited: https://en.m.wikipedia.org/wiki/Schenck_v._United_States"
"The thing that amuses me is a lot of the laws and rhetoric around these matters presume a draft. But we haven't had that for years. Instead, you had folks who signed up for the national guard thinking they'd just screw around with fancy guns in the woods 2 weeks a year absolutely apoplectic that they might actually be put in harm's way. I remember still being a teenager in Appalachia and being made to feel like you're a hair shy of an agent of a foreign power if you suggested the best way to "support the troops" was to say no to illegal wars of aggression that put them in harm's way, instead of slapping a yellow ribbon magnet on your car and saying let's bomb Iran too. You haven't truly lived until you tell some aggressive moron who very purposefully signed up for the infantry because they wanted to kill people of color in an illegal oil war that you don't give a single solitary fuck what they think, you don't thank them for their so-called service, that many Marines at Okinawa died for your right to say that, and that if they don't get their hands out of their pockets and step back you're going to invoke stand your ground and call their wing commander or whoever the fuck is in charge of them to collect the body. (Many, many folks sing a song about the constitution but break down when you use it for anything other than greasing the wheels of the military-industrial complex, and it's DISGUSTING.)"
"The things I'm drawn to in life are where art meets science. In hindsight, so much of the secret is knowing how to avoid failure. Baking bread? Build the intuition over time and you'll realize baking is forgiving so long as you don't do these "5 bad things." Gardening/farming? Yeah, there's a big list of bad things. Brewing beer? Another list of things to avoid. The basic rules (rooted in science) are like guardrails and everything else is the art. I love this so much. In my early 20s I had a week-long mind-meld knowledge transfer from a self-taught photographer. It made me fall in love with photography. I'm still using it to this day to photograph new label printers (black plastic is terrible to photograph) and labels (oh god, they are 2D!). I'm doing an OK job. Room for improvement, but fine for the initial launch. You can see them here: https://mydpi.com/products/professional-synthetic-direct-the... In case you're like "why is this guy selling label printers?!" I'm a solo software dev that wrote Label LIVE (Electron) to design and print labels. Now I'm vertically integrating with a printer I've imported from China and labels made in the USA. Business and entrepreneurship: just avoid these 9999 things and you'll be fine! Science and art…"
"If anyone wants to move beyond using the "auto" setting on their camera (or phone), I would recommend the book Understanding Exposure by Bryan Peterson, the first edition of which was published in 1990: * https://www.goodreads.com/book/show/142239.Understanding_Exp... The principles involved haven't changed much in the intervening decades; the current fourth edition was published in 2016. If all you have is a phone, you don't have to get new equipment: just perhaps a third-party 'camera app' that allows you manual control of aperture, shutter speed, and ISO/sensitivity. Once you know how each of these settings alters the resulting photo, you can use them to alter the composition of photos, which is a whole other craft. Edit: it seems recent smartphones have little-to-no adjustable camera settings."
"This is very well done for a new-to-photography audience. Will be sharing it around to people who say all their photos look like snapshots, what's up with that. Great use of examples, except for one: the kid on the bridge. > At the same time, it must be said that color and tone can be what separates a mediocre photograph from a memorable one. To illustrate, let's look at the potential evolution of this vacation shot deliberately chosen for its mediocrity... Then the dynamism is removed by 'correcting' the dutch angle to the horizon, the surprisingly good color balance is skewed off, and the whole thing gets that circa mid-2000s HDR look from Flickr and Shutterfly and the like, where every photo got tone-mapped. An underwhelming end result, especially compared to the later color and tone examples (e.g. the kitchen superhero)."
"To me the other algorithms described in the list are more novel and interesting: https://madebyevan.com/algos/crdt-tree-based-indexing/ - for when precise order is critical, like paragraphs in a document. This algorithm is almost like storing adjacency information like a linked list, but is more convergent. Very interesting for my use-case (https://www.notion.so/blog/data-model-behind-notion). https://madebyevan.com/algos/crdt-mutable-tree-hierarchy/ - for tree-shaped data, like blocks in a Notion page that should have exactly one parent, but allow concurrent re-parenting operations. https://madebyevan.com/algos/log-spaced-snapshots/ - log-spaced snapshots, for choosing what fidelity of historical information to store. For context, many CRDTs for rich text or sequences store unbounded history so that any edit made at any time can be merged into the sequence. For long-lived documents, this could be impractical to sync to all clients or keep in "hot" memory. Instead, we can decide to compact historical data and move it to cold storage, imposing a time boundary on what writes the system can accept on the hot path. The log-spaced snapshots algorithm here could be used to decide what should be kept "hot", and how to tune the cold storage."
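To make the log-spaced idea concrete, here is a toy retention rule (my own illustration, not Evan's actual algorithm): keep only the newest snapshot in each exponentially sized age bucket, so recent history stays dense and old history gets sparse.

```python
def log_spaced_keep(snapshots, latest):
    """Keep the newest snapshot in each exponentially sized age bucket."""
    newest_per_bucket = {}
    for s in snapshots:
        k = (latest - s).bit_length()  # bucket index grows with age: [2^(k-1), 2^k)
        if s > newest_per_bucket.get(k, -1):
            newest_per_bucket[k] = s
    return sorted(newest_per_bucket.values())

# 101 snapshots collapse to roughly log2(n) retained ones
print(log_spaced_keep(range(101), latest=100))
```

The retained set is dense near the present (100, 99, 98, 96, ...) and sparse in the past, which is the shape the linked write-up is after.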
"Anyone unsure of what a CRDT is (I think everyone on HN must know by now), this is the perfect intro: https://www.inkandswitch.com/peritext/ The two most widely used CRDT implementations (combining JSON-like general-purpose types and rich text editing types) are: Automerge (https://github.com/automerge/automerge) and Yjs (https://github.com/yjs/yjs). Both have JS and Rust implementations, and have bindings to most online rich text editors. CRDTs are addictive once you get into them."
"CRDTs are often talked about in the same breath as collaborative editing software, but they're useful for much more than that. They really are a theoretical model of how distributed, convergent, multi-master systems have to work, i.e. the DT in CRDT could be a whole datastore, not just an individual document. (Wish I could remember who on HN alerted me to this. I had read the paper but didn't grok the full implications.)"
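A tiny example of that "datastore, not just a document" framing: a grow-only counter (G-Counter), about the simplest CRDT there is. Each replica only ever increments its own slot, and merge is an element-wise max, so replicas converge regardless of merge order. (A sketch for illustration; unrelated to Automerge/Yjs internals.)

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica, merge = element-wise max."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.slots = {}

    def increment(self, n=1):
        # each replica only ever bumps its own slot
        self.slots[self.replica_id] = self.slots.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.slots.values())

    def merge(self, other):
        # commutative, associative, idempotent -> convergence
        for rid, count in other.slots.items():
            self.slots[rid] = max(self.slots.get(rid, 0), count)

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # both replicas converge to 5
```

Because merge is max-based, replaying the same merge twice (or in either order) changes nothing, which is exactly the property that lets this run multi-master.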
"Slightly off topic... Back before my Phoenix/Elixir days, we used to have to cache almost everything. Now we don't cache anything -- even with more traffic and fewer servers. Now, the projects I work on are not getting millions of views every day, but they are still important for many customers. We don't really design anything that needs to be cached anymore; just better access patterns and table design as we get wiser. Take Wordpress as an example. There really should not be a need to cache a Wordpress site, but some of the most popular plugins are those that cache. To be fair, I suppose their legacy non-optimal database design, access patterns, and code are responsible for this. Just getting something up and aiming for product-market fit is priority #1, and then you struggle with caching if it gets popular, at which point more resources can be thrown at it."
"> Cache coherency ensures that the behavior is correct, but every time a cache is invalidated and the same memory has to be retrieved from main memory again, it pays the performance penalty of reading from main memory. 1. First no. If any such rereads occur, they will be from the LLC (last-level cache, or L3 cache for Intel/AMD CPUs). 2. Second no. IIRC, modern caches are snooping and/or directory caches. This means that when Core#0 says "I'm changing cacheline X", Core#0 knows that Core#1 has it in its L2 (or deeper) caches. So Core#0 will publish the results of that change to Core#1. 3. The examples they gave are missing read/write barriers, which are rather important. #2 will only happen during read/write barriers. That is to say: your code is memory-unsafe by default. If order and cache invalidation matter, you actually need a barrier instruction (!!!) to make sure all these messages are passed to the right place at the right time. --------- For better or for worse, modern CPU design is about doing things thread-unsafe by default. If the programmer recognizes a problem may occur, it is the responsibility of the programmer to put the memory barriers in the right place. This is because the CPU is not the only thing that can reorder things... but also the cache system... as well as the compiler. So memory barriers inform the compiler + CPU + cache system when to synchronize. The original Netflix article is more precisely worded. > This consistency is ensured with so-called "cache coherency protocol." With a link to MESIF (F for Forwarding state, which is one such protocol / a way to share L1 cache info without going all the way to LLC or DRAM). I'm pretty sure that GPUs actually invalidate the cache dumbly, without any MESI / MESIF (or whatever). GPUs are actually really bad at these kinds of ping-pong operations and synchronization... preferring thread-fences and other synchronization mechanisms instead. ------ That being said, I think the blogpost is a good introduction to the subject. But note that it's a bit imprecise with some of the low-level details. I guess it's correct for the GPU world though, so it's not completely inaccurate... ----- The original Netflix article has an even more difficult problem revolving around Java's superclass cache and how it is affecting "true sharing" of caches. I'm not very familiar with JVM internals, so I lost track of the discussion at that point."
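A toy model of the snooping/invalidate behavior described in the comment above (grossly simplified: no MESI states, write-through instead of write-back, purely to illustrate why one core's write forces the other core's next read to refetch):

```python
class Bus:
    """Shared bus that broadcasts invalidations to every attached cache."""
    def __init__(self):
        self.caches = []

    def invalidate(self, addr, origin):
        for cache in self.caches:
            if cache is not origin:
                cache.lines.pop(addr, None)  # snoop: drop the stale line

class Cache:
    def __init__(self, bus):
        self.lines = {}
        self.bus = bus
        bus.caches.append(self)

    def read(self, addr, memory):
        if addr not in self.lines:       # miss: refetch from memory/LLC
            self.lines[addr] = memory[addr]
        return self.lines[addr]

    def write(self, addr, value, memory):
        self.bus.invalidate(addr, origin=self)  # tell the other caches first
        self.lines[addr] = value
        memory[addr] = value             # write-through, for simplicity

bus, mem = Bus(), {0x10: 1}
c0, c1 = Cache(bus), Cache(bus)
c1.read(0x10, mem)         # c1 now holds the line
c0.write(0x10, 7, mem)     # broadcast invalidates c1's copy
print(c1.read(0x10, mem))  # miss again -> refetches and sees 7
```

Real hardware does this with per-line state machines (MESI/MESIF) and the refetch is usually served from another core's cache or the LLC, not DRAM, which is the "First no" point above.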
"It has earned its title as "one of the two hardest problems in computer science": cache invalidation, naming things, and off-by-one errors."
"I'm not laid off but actively looking to change after spending a couple of years at my current employer. Despite being a staff engineer, managing 12 engineers, and having a solid revenue stream tied to my current team, I've mostly gotten rejections without interviews, 1-2 lowball offers, or radio silence. I cannot imagine how hard this must be for those laid off; hopefully this storm passes soon. EDIT: you cannot make this up - it's a Saturday, and we got an email that 1/3 of our team got laid off (I wasn't yet, but I have a feeling it might happen soon)."
"I keep getting the same amount of emails from recruiters as before, mostly startups. Found a new job (non-startup) by responding to one of those. Signed the offer and was going to quit FB on a certain date, then got the FB layoff severance package a week before that date as a nice bonus. I also interviewed at Google and got the thumbs up to proceed to team matching, but no team matches after a month. This makes me believe that they have at least a partial hiring freeze, although their recruiters are pretending that this is not the case - they're just saying that team matching takes a bit longer. Google interviews were useful as practice for the other jobs, but not for actually getting an offer."
"For those of you feeling down about being ignored, keep in mind this is the slowest hiring time of the year, with holidays and vacations, and end-of-year budgets. Even in down markets, hiring tends to pick up in January as managers get their hiring budgets for the new year and are back in the office. Hang in there!"
"Wow. This gives a lot of false positives, but it found all ~10 of my old accounts over the years. The most interesting thing is that my writing style has changed pretty drastically over the past decade. Searching for my oldest account matches my earliest usernames, whereas searching this account matched the rest. The details of the algorithm are fascinating: https://stylometry.net/about Mostly because of how simple it is. I assumed it would measure word embeddings against a trained ML model, but nothing so fancy."
"The method used, i.e. computing the cosine of the two authors' word vectors, is poorly suited for stylometric analysis because it is based on a poster's lexicon and the frequency of each word, while ignoring stylistically relevant factors like word order. Also, the cosine of word-frequency vectors conflates author-specific vocabulary with topics; in other words, my account is grouped (with >51% similarity, according to the demo) with someone probably because we wrote about similar things. A strong stylometric matcher ought to be robust against topic shifts (our personal writing style is what stays constant when we move from writing about one topic to writing about another, just like our personality is what stays constant about our behavior over time - of course styles do change, but the premise then has to be that such changes happen very slowly). Stylometrics/authorship identification is interesting and has led to some surprising findings, e.g. in forensic linguistics (Malcolm Coulthard wrote several good books about the topic). This paper lists some other features that could be used and compares a bunch of techniques: https://research.ijcaonline.org/volume86/number12/pxc3893384..."
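The cosine-of-word-vectors approach criticized above can be sketched in a few lines. This is an illustrative reconstruction, not the site's actual code; the function name and tokenization are assumptions:

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine of the angle between two word-frequency vectors.

    Note how word order is discarded entirely: only the lexicon and
    per-word counts survive, which is exactly the weakness described
    in the comment above (topic words dominate the score).
    """
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Identical texts score ~1.0; texts sharing no words score 0.0.
print(cosine_similarity("the cat sat on the mat", "the cat sat on the mat"))
print(cosine_similarity("rust gpu driver", "sourdough bread recipe"))
```

Two authors writing about the same topic will share many high-frequency content words, which is why the demo can group unrelated accounts together.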
"Ha, gruseom shows up for pg, which is dang’s old account. A worthy successor. This is a fascinating way to find similar HN users who aren’t the same person. It’s a surprisingly great recommendation engine: “If you like pg, you might also like…” Sure, the privacy concerns are valid, but the cat’s out of the bag. Might as well enjoy the benefits. montrose is almost definitely pg. Someone who talks about ancient history, Occam’s razor, VCs and startups, uses the phrase “YC cos” (relatively uncommon), etc. https://news.ycombinator.com/item?id=17112567 Nicely done. One of the best hacks I’ve seen in a long time."
"It's interesting that the article's thesis is about reading, but most of the article is actually about writing. And I think that's an understated point. I myself wrote a blog piece about "Blogging as Structured Thinking" earlier this year. I think plenty of people actually do read, across various forms of content. The real challenge is getting people to do more writing. If you want to be a thinker, you have to write. It really forces you to confront your ideas and concretizes your theses. Start a blog! If you're reading this, chances are you know how to buy a domain and spin up a blog in less than 30 minutes. Try WordPress, or Hugo with templates if you want more control. And if you don't know what to write about, this link was recently shared on HN and I thought it was pretty useful: https://simonwillison.net/2022/Nov/6/what-to-blog-about/ And yes, it's important to publish it. It makes your thoughts real. And ideas were meant to be shared."
"Many of us are programmers. Some of us are also trying to be writers. Trust me: writing is like programming. Reading code is very useful for becoming a better programmer, but you learn programming mostly by writing code. Similarly, you mostly learn how to write prose by writing prose, tons of it. Reading is especially useful if you identify certain books that are very high in style (for me one such book was "Vite di uomini non illustri" by Giuseppe Pontiggia), for your taste at least, for what you believe the best writing is. You read these books many times, to understand what's going on, what the patterns are, how to do the same magic. As a casual reader you can read 200 books every year and yet remain a terrible writer. EDIT: more about that on my blog if you care -> http://antirez.com/news/136"
"I think there is clear historical evidence that this thesis is, at a minimum, greatly exaggerated. Socrates never wrote, and I think he had more good ideas than Paul Graham ever will. Muhammad was not even literate, and unless he was inspired by divinity, his ideas were extremely powerful.I mean, I do personally find that writing is a powerful tool for thinking. Maybe that means that Paul Graham and I are normal, and Socrates and Muhammad were atypical. But maybe it says more about humans-in-our-society than it does about the essential human condition. If humans learned "by tape" (as per the SF books from the Silver Age, referenced in TFA's opening para), maybe idea-production would work along different lines.I admit, I tend to agree with him about the usefulness of writing. But I think it's just an irrational intuition, not the clear argument he implies."
"Yes, sequential I/O bandwidth is closing the gap to memory. [1] The I/O pattern to watch out for - and the biggest reason why e.g. databases do careful caching in memory - is that _random_ I/O is still dreadfully slow. I/O bandwidth is brilliant, but latency is still disappointing compared to memory. Not to mention that in typical cloud workloads, IOPS are far more expensive than memory. [1]: https://github.com/sirupsen/napkin-math"
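A rough way to see the sequential-versus-random gap for yourself is the sketch below, which times 4 KiB reads of a scratch file in file order versus shuffled order. Caveat: on a warm page cache both will look fast, so for honest numbers you would want a file larger than RAM (or direct I/O); the helper names and sizes here are arbitrary choices, not from the linked repo.

```python
import os
import random
import tempfile
import time

def time_reads(path: str, offsets, block: int = 4096) -> float:
    """Time reading one block-sized chunk at each offset, in the given order."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    return time.perf_counter() - start

def demo(blocks: int = 1024, block: int = 4096):
    """Return (sequential_seconds, random_seconds) for the same set of blocks."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(blocks * block))
        path = f.name
    try:
        sequential = [i * block for i in range(blocks)]
        shuffled = sequential[:]
        random.shuffle(shuffled)  # same bytes read, different access pattern
        return time_reads(path, sequential, block), time_reads(path, shuffled, block)
    finally:
        os.unlink(path)
```

On spinning disks the ratio is dramatic; on NVMe it is smaller but random access still pays a per-operation latency cost that bandwidth numbers hide.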
"I question the methodology. To measure this I would have N processes reading the file from disk with the max number of parallel heads (typically 16, I think). These would go straight into memory. It's possible you could do this with one process and the kernel would split the block read into 16 parallel reads as well; that needs investigation. Then I would use the rest of the compute for number crunching as fast as possible using as many available cores as possible: for this problem, I think that would basically boil down to a map-reduce. Possibly a lock-free concurrent hashmap could be competitive. Now, run these in parallel and measure the real time from start to finish of both. Also get the total CPU time spent, for reference. I'm pretty sure the author's results are polluted: while they are processing data the kernel is caching the next block. Also, it's not really fair to compare single-threaded disk IO to a single process: one of the reasons for IO being a bottleneck is that it has concurrency constraints. Nevertheless, I would be interested in both the single-threaded and concurrent results."
"> I haven’t shown an optimized Python version because it’s hard to optimize Python much further! (I got the time down from 8.4 to 7.5 seconds). It’s as fast as it is because the core operations are happening in C code – that’s why it so often doesn’t matter that “Python is slow”. An obvious optimization would be to utilize all available CPU cores by using the MapReduce pattern with multiple processes (threads won't help here because of the GIL). I believe that'd be necessary for a fair conclusion anyway, as you can't claim that I/O isn't the bottleneck without utilizing all of the available CPU and memory resources."
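The map-reduce shape suggested above can be sketched in a few lines of Python. This is an illustrative sketch, not the article's code: split the input across a process pool, count words per chunk, then merge the partial counters.

```python
from collections import Counter
from multiprocessing import Pool

def count_chunk(lines):
    """Map step: word counts for one chunk of lines."""
    counts = Counter()
    for line in lines:
        counts.update(line.lower().split())
    return counts

def word_count(lines, workers: int = 4) -> Counter:
    # Deal lines out round-robin so the chunks are roughly equal in size.
    chunks = [lines[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(count_chunk, chunks)
    # Reduce step: merge the per-process counters.
    return sum(partials, Counter())

if __name__ == "__main__":
    print(word_count(["the quick brown fox", "the lazy dog"]).most_common(2))
```

For a file too large to hold in memory you would instead hand each worker a byte range of the file, but the map/merge structure stays the same.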
"We have used this many times during GitHub outages. It's great and does what it says. But just one word of warning: when you run `act` with no arguments, what does it do? Display usage? Nope - it runs all workflows defined in the repo, all at once, in parallel! This seems like a crazy default to me. I've never wanted to do anything remotely like that, so it just seems both dangerous and inconvenient at once. Nice otherwise though..."
"This piece of software would have to handle all the intricacies of GitHub Actions and also be kept up to date with the latest changes... We are moving back to a Makefile-based approach that is called by the GitHub workflows. We can handle different levels of parallelism: the make kind, or indexing by worker number when running in Actions. That way we can test things locally while still having 32 workers on GitHub to run the full suite fast enough. I also like that we are less bound to GitHub now, because it has been notoriously unreliable for us this past year and we may move more easily to something else."
"Finally! I've been wanting something like this for ages - generally speaking I don't consider myself an idiot, but I'm forced to call that into question every time I test and debug CI/CD actions with an endless stream of pull-request modifications."
"Stunning. Definitely worth fixing the link for! https://www.djfood.org/fantasy-jodorowsky-tron-visualisation..."
"This is simply amazing. There were so many images there that are photorealistic enough that I was trying to figure out: was this a real live-action project? Or had someone simply staged the images as part of a larger "art project" or something... It took me by surprise that this was AI-generated, although in hindsight it should have been obvious. For me, this is the moment I realized AI art had become something "useful", and the world isn't going to be the same. I can see where this is going... The commercial implications are enormous: speeding up the concept art process for movies, etc. As someone here mentioned, why not make entire movies this way? Once they figure out how to animate this stuff, it puts the movie industry out of business. I can only imagine what my grandkids are going to be using this for."
"Man, do people even know Jodorowsky? Because he would definitely shit on AI-generated art. He was bored of mindless American shit long before anyone was complaining - https://www.youtube.com/watch?v=xNQZF0KF-zw"
"Those looking for a proper and comprehensive introduction to genomics from a programmer's perspective should try the Biostar Handbook: https://www.biostarhandbook.com/ I have learned so much from it. It is an introduction to what it is like to do genomics in a scientific environment. The content at the link the OP posted appears to be an oversimplified, high-level and naive overview."
"I have absolutely loved working in genomics. I am a huge believer that genomics will be a huge part of healthcare in the future, and I have two examples to motivate that point that I think may be interesting to the reader. 1) The Moderna vaccine was made with the help of Illumina genome sequencing. They were able to sequence the virus and send that sequence of nucleotides over to Moderna for them to develop the vaccine - turning a classic biology problem into a software problem, reducing the need for them to bring the virus in house. 2) Illumina has a cancer screening test called Galleri that can identify a bunch of cancers from a blood test. It identifies mutated DNA released by cancer cells. This is huge: if we can identify cancer before someone even starts to show symptoms, the chances of having a useful treatment go up dramatically. Disclaimer: I work for Illumina, views my own. I wrote some more about why genomics is cool from a technical point of view here (truly big data, hardware-accelerated bioinformatics): https://dddiaz.com/post/genomics-is-cool/"
"Really glad to see this, but it reminds me of the earlier HN post that said engineers don't go into genomics because it doesn't pay and requires a lot of investment in learning biology."
"see: https://news.ycombinator.com/item?id=33741300 related: https://news.ycombinator.com/item?id=33686599"
"My personal favorite outcome of this would be a joint public and corporate funded leap in open source development. This would do much for the budget, privacy and probably also security of businesses and private users. A good example where this principle is already in use is the Matrix protocol."
"The problem, as always, is that it's all talk and (almost) zero enforcement in Germany. Complaints to a data protection official take forever and are usually dismissed at first, even when they run counter to published opinions or decisions such as TFA. And only if you still care after a few years of waiting and at least one appeal might you get a decision - usually a very cheap one for the perpetrator."
"Cross-compiling to different targets with `create-exe` command is a very intriguing idea.> In Wasmer 3.0 we used the power of Zig for doing cross-compilation from the C glue code into other machines.> This made almost trivial to generate a [binary] for macOS from Linux (as an example).> So by default, if you are cross-compiling we try to use zig cc instead of cc so we can easily cross compile from one machine to the other with no extra dependencies.https://wasmer.io/posts/wasm-as-universal-binary-format-part...> Using the wasmer compiler we compile all WASI packages published to WAPM to a native executable for all available platforms, so that you don't need to ship a complete WASM runtime to run your wasm files.https://wasmer.io/posts/wasm-as-universal-binary-format-part..."
"Can someone ELI5 what problem this solves? I think of WebAssembly as being a tool for getting code written in <random language> to run in a web client. Can't I already run code written in <random language> on a server that I control? Heck, PG went on at some length in one of his early essays about how that was one of the great things about the Web: you could use any language you wanted on the server. Even Common Lisp..."
"Can this be compared to something like αcτµαlly pδrταblε εxεcµταblε [1], that makes a single executable run on Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD?While wasmer seems to support more source languages, it requires running the executable under a WASM virtual machine. Are there other important differences?[1] https://justine.lol/ape.html"
"There’s an alternate way to save space: the discrete Hartley transform. https://news.ycombinator.com/item?id=27386319 https://twitter.com/theshawwn/status/1400383554673065984 dht(x) = fft( (1 + 1j)*x ).real / area(x)**0.5, where area(x) = np.prod(np.shape(x)). The dht is its own inverse, which is a neat trick - no need for an inverse fft. In other words, dht(dht(x)) == x. Notice that the output is the real component of the fft. That means the storage required is exactly identical to the input signal. And since the input signal is copied to the real and imaginary components (1 + 1j)*x, the intermediate representation size is also equal to the input signal size. (Just take sin and cos of the input signal.) The dht has almost all the same properties as the fft. You can blur an image by taking a hamming window and multiplying. You can resize an image by fftshift + pad with zeros + fftshift. Etc. In practice the dht seems to have all the benefits of fft with very few downsides. I’m amazed it isn’t more popular."
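The two claims - real-valued output of the same size, and self-inverse - are easy to check numerically with NumPy. A quick 1-D sketch following the formula above (for 2-D arrays you would use `np.fft.fftn` instead):

```python
import numpy as np

def dht(x):
    # Discrete Hartley transform via the FFT trick above: feed the signal
    # into both the real and imaginary parts, keep the real output, and
    # normalize by sqrt(area) so the transform is its own inverse.
    return np.fft.fft((1 + 1j) * x).real / np.sqrt(np.prod(np.shape(x)))

x = np.random.rand(64)
H = dht(x)
assert H.shape == x.shape and H.dtype == np.float64  # same storage as the input
assert np.allclose(dht(H), x)                        # the dht is its own inverse
```

The real part of `fft((1+1j)*x)` works out to the Hartley kernel cas(θ) = cos(θ) + sin(θ), and the Hartley transform applied twice returns N·x, which the sqrt(N) normalization cancels.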
"The author links to another blog post called "A nice approximation of the norm of a 2D vector" [1]. Recently I learned from YouTube about the alpha max plus beta min algorithm [2], which is a better approximation: it uses one max and a linear combination to achieve a largest error of 3.96%, compared to the approximation in the blog post, which uses 2 max operations and has 5.3% error. I also wrote a comment on his blog, so hopefully he will see it. [1]: https://klafyvel.me/blog/articles/approximate-euclidian-norm... [2]: https://en.wikipedia.org/wiki/Alpha_max_plus_beta_min_algori..."
"Little-known fact: the FFT is an O(n log n) algorithm, but an incremental FFT is O(n), meaning that if you want to compute the FFT of the past n points of a stream of samples, it only takes O(n) work per new incoming sample."
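That incremental trick is usually called the sliding DFT: when the window advances by one sample, each of the n bins needs only one subtract, one add, and one complex rotation. A sketch using NumPy's FFT sign convention:

```python
import numpy as np

def slide(X, oldest, newest):
    """O(n) sliding-DFT update: given the spectrum X of the previous
    window, return the spectrum after the window advances one sample,
    dropping `oldest` and appending `newest`."""
    n = X.size
    twiddle = np.exp(2j * np.pi * np.arange(n) / n)
    return (X - oldest + newest) * twiddle

rng = np.random.default_rng(0)
x = rng.standard_normal(48)
n = 32
X = np.fft.fft(x[:n])          # one full FFT for the initial window
for t in range(n, x.size):     # then O(n) per new sample
    X = slide(X, x[t - n], x[t])
assert np.allclose(X, np.fft.fft(x[x.size - n:]))  # matches the fresh FFT
```

One practical caveat: the rotation accumulates floating-point error over many updates, so long-running implementations periodically recompute a full FFT or damp the recursion.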
"Someone linked [1] an interesting tool in the replies: MagicTile - Geometrical and Topological Analogues of Rubik's Cube - http://roice3.org/magictile/"If you want to try solving the Rubik's Cube this way, you should try @roice713's MagicTile. You can choose the Rubik's cube among hundreds of puzzles. The stereographic projection makes it look like the animation in this post."[1] https://twitter.com/mananself/status/1595132523264167936"
"As someone who has held national records in the sport of speedcubing, I don't feel this makes it easier to understand, at least if you are going for an instrumental understanding of how to solve it. I think the community is pretty good at teaching that now! But it does look extremely cool. My favourite one-liner for improving understanding is something roughly like 'solve areas made of pieces, not faces made of stickers'. It's much clearer when you have a cube to take apart and put together in the order you would solve them."
"Was trying to make sense of that solve, but couldn't see any algorithmic thing happening. Either that's the craziest method I ever saw, or it's just a reversal of a random scramble.Regardless, none of that takes away from the nifty 2D projection technique!"
"> We never thought our startup would be threatened by the unreliability of a company like Microsoft You're new to Azure, I guess. I'm glad the outage I had yesterday was only the third major one this year, though the one in August made me lose days of traffic, months of back and forth with their support, and a good chunk of my sanity and patience in the face of blatant, documented lies and general incompetence. One consumer-grade fiber link is enough to serve my company's traffic, and with two months of what we pay MS for their barely working cloud I could buy enough hardware to host our product for a year or two of sustained growth."
"Oof, that sucks and I feel for you. That said...> setting up in a new region would be complicated for us.Sounds to me like you've got a few weeks to get this working. Deprioritize all other work, get everyone working on this little DevOps/Infra project. You should've been multi-region from the outset, if not multi-cloud.When using the public cloud, we do tend to take it all for granted and don't even think about the fact that physical hardware is required for our clusters and that, yes, they can run out.Anyways, however hard getting another region set up may be, it seems you've no choice but to prioritize that work now. May also want to look into other cloud providers as well, depending on how practical or how overkill going multi-cloud may or may not be for your needs.I wish you luck."
"This is nothing new; Azure has been having capacity problems for over a year now [1]. Germany is not the only region affected at all; it's the case for a number of instance types in some of their larger US regions as well. In the meantime you can still commit to reserved instances; there is just no guarantee of getting those instances when you need them. The biggest advice I can give is: 1. keep trying and grabbing capacity continuously, then run with more than what you need. 2. Explore migrating to another Azure region that is less constrained. You mention a new region would be complicated, but it is likely much easier than another cloud. 1. https://www.zdnet.com/article/azures-capacity-limitations-ar..."
"Great potato story. Loosely related story here: my late grandfather in Belarus (the land of potatoes) loved potatoes just like everyone around him. He was an electrician at a concrete/brick-making factory for the entirety of his career, from right after the war until he retired in the late '80s. The amount of electricity used to fire the kilns for bricks is famously huge. He said that they'd see the rats scutter along the power lines on the wall, and every once in a while one would touch the rails wrong and disappear in a scintillating disintegration. Oh, and he let me weld random junk together with his own cobbled-together electrical welding kit when I was like 8 - oh, I loved that so much. I miss him."
"Ah, I thought this would be a new take on that apocryphal Bayesian story: > An engineer draws a random sample of electron tubes and measures their voltage. The measurements range from 75 to 99 volts. A statistician computes the sample mean and a confidence interval for the true mean. Later the statistician discovers that the voltmeter reads only as far as 100, so the population appears to be 'censored'. This necessitates a new analysis, if the statistician is orthodox. However, the engineer says he has another meter reading to 1000 volts, which he would have used if any voltage had been over 100. This is a relief to the statistician, because it means the population was effectively uncensored after all. But, the next day the engineer informs the statistician that this second meter was not working at the time of the measuring. The statistician ascertains that the engineer would not have held up the measurements until the meter was fixed, and informs him that new measurements are required. The engineer is astounded. "Next you'll be asking about my oscilloscope"."
"Sometimes low-tech works best and can literally save your life. This kind of reminds me of how you can start a fire with a water bottle [1]. Good job MTA @ diy.stackexchange.com for recommending this, and jasonhansel @ HN for posting it. [1]: https://www.youtube.com/watch?v=QwQJ-3pZfwc"
"Google locked our company out of using our own domain on Google Workspace because a former employee had signed up for a Workspace account using our domain, and we had no way to recover it. We literally manage our domain in GCP, but they wouldn't accept that we own our own domain and thus won't let us use it with Workspace, which screws you out of a lot of Google functionality. I've never seen a company so incompetent that it won't let customers use its own product while making zero attempt to solve the customer's problem. Google Cloud CEO Thomas Kurian said "We need scale to be profitable." No, you need to stop treating your customers like crap to be profitable. We don't trust you, you are a pain in our asses, and we have alternatives, so there is really no need for us to spend our money on you. It's now a matter of when we all move to AWS and Azure, not if."
"Wow, I'm really sorry this happened to you. I do believe your conclusions are correct about AWS, and Azure generally seems fine too. Google's aversion to customer service makes them extremely dangerous as a cloud provider in my mind. This also goes for other critical business services, like Google Workspace (I know! It's convenient). If you have GCP or Workspace now and you're trying to evaluate how big of a deal it is, my suggestion would be to pick up a phone and attempt to talk with someone at Google about your account. That experience can act as a preview of what the process might look like when services are turned off. If you try to call Amazon, on the other hand, it feels like Jeff Bezos might hop on the call if things aren't going well."
"At the beginning of the year, a friend working at Google recommended I test GCP using the tutorials on https://www.cloudskillsboost.google/ Google banned my account while I was doing one tutorial. One step - the provisioning of something - failed, and I had to redo a few things. It succeeded, but I had used more egress bandwidth than allowed to run the tutorial correctly, so they banned me automatically a few minutes later. Support unbanned me after some time, but I took it as a sign not to use GCP yet."
"This humble, relatively small dev team has written their game on top of a barebones engine and ported it to three more platforms and architectures, yet multi-billion-dollar studios claim supporting anything beyond x86_64 Windows is an impossible feat. I have so much respect for Wube. Absolutely amazing quality and care in everything they do."
"As a person who wants to play games on a Mac, sometimes I feel like Charlie Brown trying to kick the football. But Apple's custom silicon has me hoping again. I keep seeing more and more stories like this where ported games don't just work "acceptably", but actually work better on M1 and M2 chips.Apple's hardware is unquestionably very good now, and their graphics APIs are actually seeing some uptake. The recent stories about Resident Evil Village especially sound positive."
"I've played about 60 hours of Factorio on the Nintendo Switch version, which came out 4 weeks ago. I'd never played it before. This comment has very little to do with Apple Silicon, except to say that I imagine it's a faster platform than the Switch, and that Factorio is addictive fun. Here's hoping the promise of the post turns into reality."
"So basically you run an endless script to fetch https://www.tesla.com/sites/default/settings.php and hope that some day there will be a minor nginx config error which lets you download the PHP source instead of executing it. This will happen some day, so invest 5 bucks per month now and you'll be able to exploit Tesla at some point - maybe you can be first in line for the Cybertruck :-)"
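If you actually wanted to automate that watch (purely illustrative - don't hammer servers you don't own), the interesting part is just telling an executed page apart from leaked source; the function name below is made up for the sketch:

```python
def looks_like_php_source(body: str) -> bool:
    """A PHP file served instead of executed begins with its opening tag,
    whereas a correctly executed page returns rendered output (HTML,
    a redirect body, etc.) with the tag stripped."""
    return body.lstrip().startswith("<?php")

# Leaked source vs. rendered page:
assert looks_like_php_source("<?php $databases = ['default' => []];")
assert not looks_like_php_source("<!DOCTYPE html><html>...</html>")
```

The polling loop itself would just be a periodic HTTP GET that alerts when this predicate flips to true.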
"Interesting, the exclude file (actually, everything under .git/info) 403s, while .git/index is a 404. - https://www.tesla.com/.git/info/exclude - https://www.tesla.com/.git/index README.txt 403s too: https://www.tesla.com/README.txt edit: just going to add files I've found here: - https://www.tesla.com/.editorconfig - https://www.tesla.com/profiles/README.txt"
"A company's marketing website and their actual products have little in common. I would be surprised if any engineers even work on the marketing website, and blown away if it is co-located with anything sensitive."
"Well I certainly didn't have "Tumblr becomes culturally relevant in 2023 after adding federated content cross-compatibility with foss platforms" on my bingo card, but here we are I guess."
"This reminds me a lot of OpenID over a decade ago. The idea was that anyone could set up their own identity provider on their own domain and log in anywhere. Unfortunately it became "login with Google/Facebook", which is a real shame. I hope sites don't restrict ActivityPub usage to only a few big players."
"Can you imagine? You log into your Tumblr dashboard: all your friends on Mastodon making posts, your favorite bloggers on Write.as/WriteFreely instances, your favorite artists and photographers on Pixelfed, all mingling with the people you follow on Tumblr. And it's all neatly tied together through ActivityPub. This was the original promise of the tumblelogs Tumblr was inspired by: https://kottke.org/05/10/tumblelogs I see people worry that it'll be blocked the moment it steps onto the Fediverse. The Matt in Automattic lived through the same decades of the web and internet we all did. I hope he has the good sense to slow-walk it and make sure Tumblr is a good citizen of the Fediverse, rather than flipping a switch one day to open the floodgates."
"This is a wonderful article, thanks for sharing. As always, Cloudflare blog posts do not disappoint. It's very interesting that they are essentially treating IP addresses as "data". Once you look at the problem through a distributed-systems lens, the solution here maps to distributed systems almost perfectly. - Replicating a piece of data on every host in the fleet is expensive, but fast and reliable. The compromise is usually to keep one replica per region; same as how they share a single /32 IP address in a region. - "Sending a datagram to IP X" is no different from "fetching data X from a distributed system". This is essentially the underlying philosophy of soft-unicast. Just like data living in a distributed system/cloud, you no longer know where an IP address is located. It's ingenious. They said they don't like stateful NAT, which is understandable. But the load balancer still has to be stateful to perform the routing correctly. It would be an interesting follow-up blog post talking about how they coordinate port/data movements (moving a port from server A to server B), as that is state management too (not very different from moving data in a distributed system, again)."
"Whenever I see the name Marek Majkowski come up, I know the blog post is going to be good.I had to solve this exact problem a year ago when attempting to build an anycast forward proxy, quickly came to the conclusion that it'd be impossible without a massive infrastructure presence. Ironically I was using CF connections to debug how they might go about this problem, when I realized they were just using local unicast routes for egress traffic I stopped digging any deeper.Maintaining a routing table in unimog to forward lopsided egress connections to the correct DC is brilliant and shows what is possible when you have a global network to play with, however I wonder if this opens up an attack vector where previously distributed connections are now being forwarded & centralized at a single DC, especially if they are all destined for the same port slice..."
"> However, while anycast works well in the ingress direction, it can't operate on egress. Establishing an outgoing connection from an anycast IP won't work. Consider the response packet. It's likely to be routed back to a wrong place - a data center geographically closest to the sender, not necessarily the source data center! Slightly OT question, but why wouldn't this be a problem with ingress, too? E.g. suppose I want to send a request to https://1.2.3.4. What I don't know is that 1.2.3.4 is an anycast address. So my client sends a SYN packet to 1.2.3.4:443 to open the connection. The packet is routed to data center #1. The data center duly replies with a SYN/ACK packet, which my client answers with an ACK packet. However, due to some bad luck, the ACK packet is routed to data center #2, which is also a destination for the anycast address. Of course, data center #2 doesn't know anything about my connection, so it just drops the ACK or replies with a RST. In the best case, I can eventually resend my ACK and reach the right data center (with multi-second delay); in the worst case, the connection setup will fail. Why does this not happen on ingress, but is a problem for egress? Even if the handshake used SYN cookies and got through on data center #2, what would keep subsequent packets that I send on that connection from being routed to random data centers that don't know anything about the connection?"
"Asahi is getting closer and closer to "daily driver" usability at an amazing pace.Anyone have an idea how soon we should expect GPU support to be in mainline?"
"Question for those in the know: are there any substantial changes from M1 to M2? I'm sure lots of tuning took place, but is there any major component that was completely overhauled?"
"Sad that iPads don't have open bootloaders. I'd be happy using my M1 iPad Pro from time to time."
"> 99.9% of websites on the Internet will only let you create one account for each email address. So if you want to see if an email address has an account, try signing up for a new account with the same email address. This is not true if the signup flow is implemented correctly. Signing up for an account should always respond with the same message: "we sent an email for you to confirm your account signup". The owner of that email address then receives an email - either 1) the normal signup confirmation, or 2) "did you just try to sign up? You already have a valid account for this email address." This way you cannot tell via the signup web form alone whether an account exists or not. You need to have access to the email address."
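A sketch of that flow (hypothetical function names, no particular framework): the branch happens in the email channel, never in the HTTP response.

```python
def handle_signup(email: str, users: dict, send_email) -> str:
    """Enumeration-safe signup: the HTTP response is identical whether or
    not the address already has an account; only the email sent differs."""
    if email in users:
        send_email(email, "Did you just try to sign up? You already have an account.")
    else:
        users[email] = {"status": "pending_confirmation"}
        send_email(email, "Confirm your account signup.")
    # Same response on both paths, so the form itself leaks nothing.
    return "We sent an email for you to confirm your account signup."

# Both outcomes look the same to whoever is driving the signup form:
outbox = []
users = {"a@example.com": {"status": "active"}}
r1 = handle_signup("a@example.com", users, lambda to, msg: outbox.append(msg))
r2 = handle_signup("new@example.com", users, lambda to, msg: outbox.append(msg))
assert r1 == r2
```

In a real implementation you would also want the two paths to take roughly the same wall-clock time, since a measurable timing difference reopens the same side channel.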
"But the error message can be true. If you mistype your username, you might have entered another, existing username. Just telling the user 'wrong password' will mean they are less likely to check that the username was correct.The website doesn't always know which one you got wrong, and assuming one way or the other just makes things worse."
"I don't get it. It's fine to leak/allow user enumeration on the login page because it's leaked elsewhere anyway? That's a pretty big assumption. One way to allow users to register using their email address without leaking any information is to just say "user created, please check your inbox to confirm your email address" or something like that. If the user already exists, swap the confirmation email for a warning email that they already have an account. Or am I missing something?"
"When I used to work for a large mobile network, we would spend ages testing the firmware of new devices. Even giants (at the time) like Nokia would release phones which couldn't dial the emergency services, or put out more than the legal limit of radiation, or were in other ways defective. We'd test, send a report back, wait a few weeks, get a new firmware, test again, and repeat until everything worked. It took months. That was fine when phones weren't expected to be updated by users. But when more modern phones arrived with flashable firmware, customers couldn't stand the delays associated with testing. They'd see that a new firmware had been released and complain that the mobile network operators were delaying progress, dragging our feet, deliberately depriving customers of something cool. The fact was, operators very often didn't certify the firmware because it contained *dangerous* bugs. I'm sure there was also a cost element - why pay to re-test a phone that you're no longer selling? - but that wasn't the primary driver. Well, the manufacturers and customers "won". If you buy a phone through your network, it probably has a network-certified firmware blob. If not, you're at the mercy of the manufacturer."
"This article, dating back to January, lays it out pretty clearly: https://arstechnica.com/gadgets/2022/01/google-fixes-nightma...> If you're logged out, launching Microsoft Teams 10 times will result in 10 duplicate PhoneAccounts from Teams clogging your phone. Teams shouldn't do this, and Microsoft's update stopped Teams from doing this, but a bunch of duplicate PhoneAccounts also shouldn't be enough to bring Android's phone system to its knees.> Next bug: when picking a PhoneAccount to run the emergency call through, [...] it's possible for this to result in an integer overflow or underflow, and now the phone subsystem is going to crash.> A third bug in this mess is that Microsoft Teams does not even register itself as an emergency call handler.> An update is not arriving for the Pixel 6 yet. Google's newest flagship is going though a bit of an update crisis at the moment. The December 2021 update was pulled due to unrelated "mobile connectivity issues" (phone calls don't work). While Google scrambles to fix everything, the next Pixel 6 update with this 911 fix is due in "late January." Until then, it's normal to be on the November patch. Both of Google's "early January" and "late January" patch timelines seem incredibly slow for a bug that could cause users to literally die.If the OP article is correct, then apparently this still hasn't actually been fixed yet."
"I'm all over the place with pocket computing devices and remain confused. I once has an iphone. Beautifully crafted, both inside and out, but so frustratingly locked down that I was desperate to leave it behind. They have progressively opened up the platform, but it always lags behind (I remember I couldn't access the cloud storage from anything other than a mac).My favourite devices have been more alternative. E90 running symbian, which was fairly 'open' for its time - I could install any software, proper multitasking. The N900 also, full linux system, great phone. But then the apps I find useful, are often not available for that platform.At the moment I'm on Pixel, which has been a good balance between being well supported, while still fairly open. I can sideload apps, run a linux distro in the form of termux. As a bonus the camera is great. I have to remind myself that while it might not quite compare to iphones in terms of refinement and hardware, I do at least have more freedom on the platform, and it's easy to take that for granted until you lose it."
"I recall from my time in Google Geo years ago that the idea of integrating Search and Maps was a big part of the "New Maps" release that happened around 2014. The rumor I heard was that someone (possibly even Larry himself) wanted to be able to have interactive maps directly on the search results page, so that the navigation from a search query to a map wouldn't involve even a page reload. So the big Maps frontend rewrite actually ended up merging MFE into GWS, the web search frontend server. I recall seeing maps hosted at google.com/maps around that time, but I don't know if that was ever launched fully or if it was just an experiment.In any case, though, my understanding is that the technical capacity for this has existed for nearly 10 years now, just behind a configuration setting. So it's possible that this change is just a code cleanup. It's also possible that someone is trying to increase the percentage of searches that have location information, that doesn't seem terribly far-fetched either, and I can imagine lots of ways people could try to rationalize it as actually benefiting users. (Whether it actually does benefit users is of course debatable.)"
"This is a fantastic example of motivated reasoning. This "change" (which apparently isn't even new) can have so many different reasons, some of which are less harmful and some of which are probably worse (privacy-wise) than the one mentioned here. There is no indication that re/mis-using permissions is specifically what they wanted to do here, there is also no example of them doing it right now. Don't get me wrong, there is also no evidence that this isn't the real reason and that they wouldn't do that in the future. But the blog post basically list a single symptom and jumps right to the one conclusion that fits what the author expects."
"Funny thing is, it depends on your threat model.Using google.com/XXX for all its services protect the user from being spied by external actors such as ISP because everything is hidden behind HTTPS.Whereas, with XXX.google.com, external actors knows that you are using service XXX."
"https://archive.ph/YMyrHhttps://web.archive.org/web/20221124034213/https://www.washi...Note: Some parts of the archive don't render correctly? Unsure why. You will get better results clearing cookies + localstorage for washingtonpost.com."
"Amazon is the third largest advertising company after Google and Meta. Its ad revenue is $32B (and growing fast, the run rate is $40B). That is half the revenue of AWS, which is worth 70% of Amazon's market cap. The inescapable conclusion is that Amazon's advertising is worth the remaining 30% of Amazon's market cap and Amazon's e-commerce arm is deemed worthless by Wall Street, its only purpose being to support the advertising business, just what Google Search is to Alphabet.Think on that for a moment. The other inescapable conclusion is that whenever the quality of the shopping experience on Amazon and the needs of Amazon's advertising business clash, advertising will win (just as it has on Google). That's an even more foregone conclusion since Andy Jassy took on the top job, he's from AWS and owes no special allegiance to the historical e-commerce business."
"My process for getting usable results on Amazon (.co.uk, presumably the others too):1. Search for thing2. Filter by department (necessary for 3)3. Filter by Seller: only Amazon4: Filter by reviews: 4 stars+5: Sort by price, Low > High6: (Further filters as appropriate)7: Look at only products with a high number of reviews8: For every product, "See all reviews" and filter on "Verified purchase only" and "Show only reviews for {the product variant you're actually looking at}". Closely scrutinise 1 and 2 star reviews.But sometimes even this _still_ doesn't get me quite what I want, because when an item is sold both by Amazon and a 3rd-party it can be sorted based on the non-Amazon price.It does feel just a little like Amazon's goals might not be perfectly aligned with those of the customer."
"This is my favorite community on the internet. It's the only one I don't feel gross about after spending time on it.Shout out to all the talented people here and for @dang for keeping things in check."
"It's an understatement to call HN a daily read for me. Does anyone else check the comments before following the link? The perspectives shared here are a valuable part of my information diet. Thank you and Happy Thanksgiving!"
"When I had children, a family of my own, I came to find Thanksgiving my favorite holiday. Seemingly immune to the commercialization (I'm going to disassociate Black Friday with Thanksgiving), it became for me a day to relax, hang out with the family and ... be thankful.How pure and unencumbered is that?Best thing the U.S. has come up with. (Landing on the Moon was cool too though.)"
"Can also just buy a bag of 'em: https://www.techtoolsupply.com/RJ-45-Quick-Plug-Easy-Repair-...I'm still working my way through the 50-count bag I bought in 2018; evidently I don't actually encounter that many broken cables, they just exert an outsized effect on my psyche when I do."
"Heh. The first thing I do with a new ethernet cable is to break the clip, laptop side. It's orders of magnitude better to have an occasional disconnection than to trip with the cable and make the laptop fall from the table.I did not invent this. I've seen legendary graybeards do it. I see it as a rite of maturity, like cyclists who discard the caps of their presta valves."
"So the 8 pin modular connector(rj-45) may not be the best connector in the world(the 8088 sas connector is a serious contender for that honor), but it does one thing better than any most connectors, it is designed to be field terminated, and as such is easy to fix. the crimping dies are ubiquitous, the process is simple. because of this single fact, I think it is better than just about any other connector in widespread use. because you can fix the infernal thing.As such the article left me a bit confused, why not just cut off the end and putting a new plug on? with an 8 pin connector this is very easy. But I am in the industry and tend to have a crimper close to hand. perhaps some are not as fortunate."
"Hi, author here. Happy to answer questions.Version 0.8 just got out[1]. For the next one, I'll try to focus on making the codebase fully ready for multi-entities and introduce a Project Board. Later we can add support for code review!If you are looking for how it works, you can have a look at the data model introduction[2].git-bug is only pushed forward by volunteers so it's taking its time to fully grow, so I'll take the opportunity to welcome everyone to join the fun :-)[1]: https://github.com/MichaelMure/git-bug/releases/tag/v0.8.0[2]: https://github.com/MichaelMure/git-bug/blob/master/doc/model..."
"Around 2010-2015 there were a whole bunch of these distributed git bugtrackers and none of them took off. Most are completely dead. IIRC one of the biggest problems was its feature: issue state being distributed as it is, it was very easy for developers to end up with issues having different states/comments, and depending on the implementation even differed across local branches. But also in a work environment there was no place for project managers to view/create/update issues.I remember listing out 4-5 of these in the past when the topic has come up, but the only old one I can remember/find right now is https://github.com/dspinellis/git-issueEdit - Found some more using duckduckgo instead of google:* https://github.com/jashmenn/ditz (last update 12 years ago)* https://bugseverywhere.org/ (I think this was the most popular at one point)* https://github.com/marekjm/issue (this is separate from git-issue above)* A blog post from 2012 that lists these and a few others (half the links are dead): http://www.cs.unb.ca/~bremner/blog/posts/git-issue-trackers/From the blog post, the additional ones where the links still work:* https://github.com/chilts/cil (last updated 11 years ago)* http://syncwith.us/sd/ (last updated 6 years ago)"
"I understand why the bug IDs are hashes, but that's going to be pretty inconvenient for practical use. Yes I know we manage it with Git commit names, but bug IDs are printed and spoken much more than commits, e.g. when communicating with a test team, management, or even in release notes.I wonder if we could use some sort of distributed naming scheme for this, similar to Blockchain DNS?"
"Valve's continual focus on Linux (SteamOS 1.0 was released eight years ago) is honestly incredible, Proton even sometimes works better than native Linux builds. Truly nobody else (in the gaming space) is doing it like Valve are. I saw a talk[1] from a KDE dev talking about features Valve sponsored to be added to KDE Plasma and it's things that are useful for everyone outside the context of the steam deck.The only thing that doesn't really work I've noticed is when games have an online component, whether it's like easy anti cheat which I've heard should be just flipping a switch to enable but I haven't seen anyone actually do that, or some weirdness happening with whatever the new Microsoft Flight Simulator is doing that makes it seemingly a 50/50 coin toss as to whether it'll run with the exact same settings.[1] https://www.youtube.com/watch?v=a0gEIeFgDX0"
"Proton was instrumental in my move from Windows to Linux. With over 400 games in my back catalog, I didn't want to lose thousands of dollars not to mention thousands of hours on games which I enjoy.Thus far, 92% of games have ran flawlessly for me on Fedora (4 failures in the last 50). The main issues have been related to games that have kernel mode drivers for copy protection or other exotic types of anti-cheat. Perhaps most amazing (to me anyway) is the fact that mods and workshop items work perfectly. In most cases, I could pop open a saved game and continue on Linux, custom mods included.Performance-wise, I haven't noticed a difference but I generally run very modern hardware which works better with the DirectX to Vulcan implementation. I also swapped to an AMD chip/gpu when moving to Linux and I think that removed the headaches that people often have with Nvidia drivers. Overall, it's been fantastic."
"I had to go figure out what Proton is:"Proton is a tool for use with the Steam client which allows games which are exclusive to Windows to run on the Linux operating system. It uses Wine to facilitate this."https://github.com/ValveSoftware/Proton"
"I say this as someone who has been heavily using the command line for the last decade, even if you "know" how to use a CLI decently well, go read this if you haven't. From only a couple minutes of reading I found not one, but two new tidbits that I never even considered looking up. This information will completely change my daily levels of frustration when using a CLI. Very, very high ROI link."
"I’ve been using the command line for almost 3 decades. This is really great! I found it covers basically everything I ever had to care about. Strikes a good balance in level of detail. Well done!"
"Please, can anyone provide guidance for making Win10 CLI UX tolerable? After more than 2 decades on macOS, very comfortable w/ customized zsh in iTerm, I'm now unavoidably working in Windows and hating it. Sad to discover my vague perception of Windows as a 2nd-class citizen (or 3rd-world country) is all too accurate. Running git-bash in Terminal, surfacing any meaningful git status in my $PS1 incurs multisecond latency. Surely there's a better way. Right?"
"Still use last.fm. Most "You might like this" algorithms go straight for the low-hanging fruit, and often fail to take any kind of nuance into account.I can't tell you how many of the streaming services will see my Black Sabbath play history and immediately recommend, "If you like Black Sabbath, you should love...Slipknot!" But I've never had a real person make that mistake, because a real person who looks at my last.fm history and has an understanding of the genre says "Gee, this guy has plenty of Black Sabbath, Iron Maiden, and tons of doom metal on his list, but doesn't have Slipknot, Korn, or Pantera in his history. Maybe that's intentional."Human review and recommendation still beats algorithmic recommendation by a mile if you have discerning tastes."
"Last.fm died (as in for me personally) when I stopped cultivating my own music library. I used to have gigabytes of MP3s and FLACs, all neatly organised into folders, usually by artist/album, and meticulously maintained ID3 tags. All played through software like Winamp with the audioscrobbler plugin, or iTunes when I had my beloved iPod classic.All of that drifted away as I got older, and the dawn of streaming services like Spotify came onto the scene. I'm not sure where my music is now, probably on a hard drive somewhere, dumped amongst other junk.I think spotify used to come with last.fm support but I think I just lost interest in the whole thing, I don't consume music in the same way as when I was a younger man.EDIT: Just logged into my last.fm account and it looks like the scrobbling still works from spotify, so it's been scrobbling all this time, probably for 10+ years without me logging in!"
"Contrary to popular belief Last.fm never really died (and I hope it never will), however it lost years ago its most valuable thing: *its own streaming service*.It was just perfect at everything, was it finding new releases, discovery obscure gems, or play your favorite things all over. It was so magical that you could hardly believe it was computer generated.Also, it was a wonderful "sane" social network where music and only music was at its heart: no vanity metrics (eg: counting likes) / vanity egos (eg: influencers) or purely material interests (eg: make money from this or that).Anyway, it's sad it lost the streaming war pretty soon. Maybe it was just too genuine to compete with services driven by dark patterns, suspicious agendas, and mostly, greed.Wait, is this a metaphor of the old internet versus the present state of digital affairs? Or am I just getting nostalgic here?I don't know. Long live Last.fm!111,920 Scrobbles from 9,137 Artists since Jul 2006."
"It's a liquid-sodium-cooled, graphite-core system with no need for pumps, aka 'passively cooled' via heat [edit] pipes. Output is:> "The microreactor can generate 5 MW of electricity or 13 MW of heat from a 15 MW thermal core. Exhaust heat from the power conversion system can be used for district heating applications or low-temperature steam."Control systems are kind of interesting:> "The only moving or mechanical parts in the reactor system are reactivity control drums, which manage the power level and allow absorber material to passively turn inward toward the core if power demand is reduced or lost, and turn a reflector material toward the core if demand increases automatically. Hence the term “nuclear battery.”"I'm generally not a nuclear advocate but if they've really managed to eliminate the need for active cooling, and have a robust system that can safely shut down with concerns about meltdown even without external power, that's a pretty big advance. Looks remarkably promising... keep your fingers crossed. (New nuclear tech hasn't had the greatest track record over the past several decades, i.e. pebble beds didn't work out etc.)"
"If the price is right and medium size communities could get their acts together, something like this could potentially disrupt the entire grid model. In California, regardless of what wholesale electricity costs, the retail cost is something like $200/MWh more than could be considered reasonable. Put another way, the utility (PG&E) is charging an immense premium. Normally, displacing PG&E would be impractical:a. The actual transmission system is a phenomenally large capital investment developed over many decades. You can’t just VC up a new electric grid in a developed area. And the incumbent mostly owns the existing infrastructure.b. Regulation, good and bad.It’s possible to sell power to the utility for a reasonable price per MWh. But one can’t easily sell to the utility’s customers.But this reactor is small! 5 MW could serve maybe 1000 expensive homes in an expensive area without an enormous transmission system. Anyone trying to disrupt the incumbent utility with something like this has $200/MWh of inefficiency to exploit. $24k per day of operation will offset a decent amount of capital cost and regulatory effort to get the electricity to customers.Put another way, a wealthy community could buy a few of these, figure out local distribution, and ditch the incumbent utility. This could be fantastic."
"Northern communities in Canada are a perfect application of this tech. Currently, they mostly burn diesel for power."
"Most of the OP is quotes from the article which discussed a few days ago here:Why Meta’s latest large language model survived only three days online - https://news.ycombinator.com/item?id=33670124 - Nov 2022 (119 comments)"
"I replied to LeCun's claims about their latest protein structure predictor and he immediately got defensive. The problem is that i'm an expert in that realm and he is not. My statements were factual (pointing out real limitations in their system along with the lack of improvement over AlphaFold) and he responded by regurgitating the same misleading claims everybody in ML who doesn't understand biology makes. I've seen this pattern repeatedly.It's too bad because you really do want leaders who listen to criticism carefully and don't immediately get defensive."
"As a bright-eyed science undergraduate, I went to my first conference thinking how amazing it would be to have all these accomplished and intelligent people in my field all coming together to share their knowledge and make the world a better place.And my expectations were exceeded by the first speaker. I couldn't wait for 3 full days of this! Then the second speaker got up, and spent his entire presentation telling why the first person was an idiot and totally wrong and his research was garbage because his was better. That's how I found out my field of study was broken into two warring factions, who spent the rest of the conference arguing with each other.I left the conference somewhat disillusioned, having learned the important life lesson that just because you're a scientist doesn't mean you aren't also a human, with all the lovely human characteristics that entails. And compared to this fellow, the amount of money and fame at stake in my tiny field was miniscule. I can only imagine the kinds of egos you see at play among the scientists in this article."
"Time is a flat circle, eh?But all kidding aside, web directories should be much more powerful now than in the 90s. Websites have RSS, and directory websites should be able to automatically monitor things like uptime, and leverage RSS to preview a site's most recent post.I've considered maintaining my own directory on my personal website (a one-way webring if you will), but always stopped because the sites I linked to either died, or were acquired and became something very different."
"I love to see this. The death of blogs and RSS is highly exaggerated. The idea that Google "killed" blogs by killing Google Reader is a meme that is more destructive than Google's act in itself.There are countless healthy and active blogs that you can read via RSS. There are great RSS reader apps.For us technically-minded folks we need to keep being proactive about helping people read the web via RSS, improving discovery, and continually making RSS a first-class option on sites we build."
"Bookmarked. Will revisit.Anyone else notice everything old is new again? Neocities[0], Marginalia Search[1], Project Gemini[2], etcThere's many others I'm forgetting, and new ones popup on Hackernews each week.Is this just basic nostalgia, people wanting to recreate the dial-up days or even BBS days?[0] https://neocities.org/[1] https://www.marginalia.nu/[2] https://gemini.circumlunar.space/"
"As someone who only needs to use Flexbox/Grid every once in a while, this is precisely what I needed.I've been struggling with static documentation like the one from Tailwind [1] or MDN [2]. Writing good and intuitive documentation is hard, surely this must have been quite an effort.[1]: https://tailwindcss.com/docs/flex-direction[2]: https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layou..."
"I love flex, it's made CSS so easy. One of the things that improved my flex usage was using Penpot to draw my designs before implementing them.They have alignment properties for graphical elements that work like flex's justify-content and align-items properties so once you design a view in Penpot it becomes almost trivial to translate it into HTML/CSS using flex.It really changed my mindset from working with relative or absolute positioning, blocks, margins, padding, etc... to simply working with flex everywhere. And it's responsive automatically!"
""I like to think of CSS as a collection of layout modes. Each layout mode is an algorithm that can implement or redefine each CSS property. We provide an algorithm with our CSS declarations (key/value pairs), and the algorithm decides how to use them."Brilliant. The way basic CSS properties are taught often ignores the layout mode. Even MDN does not mention that "width" is a CSS property that is not always respected when "display: flex". Making this distinction more prominent would reduce the amount of confusion/frustration when certain CSS properties appear to "not work".[1] https://developer.mozilla.org/en-US/docs/Web/CSS/width"
"Coppicing, Hedge laying, Bocage, drystone walling, wattle-and-daub are all domestic comparable ancient crafts of Europe. The point being that probably only drystone walling is valued in a way comparable to the Japanese version of Coppicing, which really has been transformed into an artform. European coppices are cut close to the rootstock and cut down far younger for use as poles, for wood turning, for hedge laying.Timber framed construction in Europe was nailless (wooden tree nails permitted) but the mortice and tenon joinery of Japan is in another league. Maybe European Gothic cathedral roofs come close, little else would.Japan modernised in the modern era, it's industrial revolution was comparatively recent and it remained feudal far longer than Europe (Russian serfdom aside)There are probably more continuous family heritage firms in Japan practising some art (brewing, soy sauce, woodwork, coppicing) than anywhere else. Can you name a European family concern doing the same thing continuously since before 1600? I can't name any Japanese ones but I wouldn't be surprised if there were many. Institutional enterprises like Oxford university press exist since deep time, but in Japan it would be a continuous lineage of printers continuing to use woodblock printing (maybe alongside hot type or photo typesetting)Farming does remain in the family but European farming practices have modernised since forever."
"Everyone here is mentioning coppicing, so I suspect there will be some interest in the Low Tech Magazine articles on the subject:https://www.lowtechmagazine.com/coppicing/And since we're talking about doing cool things with trees, I just wanted to mention that LTM has more interesting articles slightly "adjacent" to this topic, like this one about a half-forgotten technique for growing citrus trees in climates with freezing temperatures:https://www.lowtechmagazine.com/2020/04/fruit-trenches-culti..."
"I wish there was a way to apply this technique to some of the rarer woods that have fantastic uses in production, like teak, rosewood, and grenadilla.If you're into hardwoods like musical instruments or fine furniture there's an appreciation you grow for the character of these woods. And a moral quandary with the sourcing of it. It seems impossible to find a sustainable method to source the material. A lot of what we make today will seem impossible in the coming decades.Grenadilla trees in particular are suffering due to over harvesting and poor oversight in the markets where it is sourced. It is prized for woodwind instruments - and the day is coming where it's only going to be economical to use recycled polymer composites (which have many benefits besides commercial) over true solid wood instruments. If we could sustainably turn these trees into fruit trees harvested over centuries it would be a great service to nature and the industry."
"Is there a good explanation of how to train this from scratch with a custom dataset[0]?I've been looking around the documentation on Huggingface, but all I could find was either how to train unconditional U-Nets[1], or how to use the pretrained Stable Diffusion model to process image prompts (which I already know how to do). Writing a training loop for CLIP manually wound up with me banging against all sorts of strange roadblocks and missing bits of documentation, and I still don't have it working. I'm pretty sure I also need some other trainables at some point, too.[0] Specifically, Wikimedia Commons images in the PD-Art-100 category, because the images will be public domain in the US and the labels CC-BY-SA. This would rule out a lot of the complaints people have about living artists' work getting scraped into the machine; and probably satisfy Debian's ML guidelines.[1] Which actually does work"
"I am a solo dev working on a creative content creation app to leverage the latest developments in AI.Demoing even the v1 of stable diffusion to the non-technical general users blows them away completely.Now that v2 is here, it’s clear we’re not able to keep pace in developing products to take advantage of it.The general public still is blown away by autosuggest in mobile OS keyboards. Very few really know how far AI tech has evolved.Huge market opportunity for folks wanting to ride the wave here.This is exciting for me personally, since I can keep plugging in newer and better versions of these models into my app and it becomes better.Even some of the tech folks I demo my app to, are simply amazed how I can manage to do this solo."
"In addition to removing NSFW images from the training set, this 2.0 release apparently also removed commercial artist styles and celebrities [1]. While it should be possible to fine tune this model to create them anyway using DreamBooth or a similar approach, they clearly went for the safe route after taking some heat.1. https://twitter.com/emostaque/status/1595731407095140352?s=4..."
"It's nice but could you make it so that the rotation direction can additionally be toggled by holding down shift while clicking? I find it very irritating to always move the mouse to the bottom right corner to select the rotation direction - especially since "every click counts".Other than that, I first thought that some of the tiles should start with correct orientation but soon realized that would make this puzzle just really easy, because our brains are just excellent at seeing the correct image from just a few correctly oriented tiles I think."
"I like how well this specific picture works for this puzzle. The coastline lines up perfectly with a square border. The cat's face is sideways. The two steps are the same color, so the rotation could be in one of two possible positions. It makes it just tricky enough to make the right answer not immediately obvious, but it's not so hard that there's any chance for being frustrated. Plus, it's a kitty."
"Implement right click for the other direction?"
"I can attest to this and took all of my notes on paper in college. However, once I started a real job I realized that this strategy doesn't scale to all situations. In college, I needed to be able to recall all of the information I had ingested: it was low-write, high-read. In the workplace, there's much more information, but I'm unlikely to need most of it: it's high-write, low-read. I need to be able to reference the information, but not necessarily recall it. Taking paper notes became too much of a burden and I moved to a wiki of markdown notes."
"If you're in charge of other people, it's worth noting that some very common cognitive problems like ADHD, Dysgraphia, and Dyslexia negate these benefits in some affected people. The cognitive load of making legible marks can become high enough to become the focus, rather than the actual content. Pressuring someone already struggling with working memory to do things like this, is counterproductive, if not demoralizing. Work style advice is great, but make sure you listen if they say it doesn't work for them rather than getting into the "it worked for me so you must be doing it wrong" mindset."
"I'd be curious if anyone had good advice on how to improve your handwriting ability well into adulthood (I'm 35). My penmanship was so bad in grade school that I attended special education classes to improve it, but it still was and remains horrible. This is a source of insecurity for me and since I've always been glued to a keyboard it has been easy to handwave away as "screw this, the world is all typing-based anyhow".But I have seen evidence before that handwriting notes leads to improved retention, and seeing it here now, I'm wondering if there's a framework or resource that can help me feel a little bit more confident in my ability to, you know...write words with pen and paper. It's embarrassing even talking about it, honestly."
"My wife and I are over 70. We live in the UK.It's fun to generalise, but not always helpful. No, we don't watch anything like 6 hours of TV a day. We have our evening meal, like every meal, at a table. We mostly watch one thing a day on streaming, usually well made fiction. We read news on our phones but never watch it on TV because of all the uninformed comment.Maybe we are not typical, but we do exist."
"The question is not how much time you spend at a screen, but who is on the other side of it. Much of the conversation so far concerns time, and the virtues or vices of how we spend it. Not all pastimes are equal. Knitting a jumper, taking a hike, or skateboarding are actions one performs on or in the world. Reading a book is more of an action that the world (the author especially) performs upon you. It is a different frame. Movies and video gaming are somewhere in the middle. Some media forms, such as daytime trash-TV and TikTok, are at the extreme of the passive/receptive frame. It is a pipeline of affect directly to your hypothalamus. Any discussion of harms or benefits must be understood in that light."
"My father used to read the dead-tree newspapers every morning. I'm now reading them online. Is it fair to count that time as an increase in my screen time? I guess there are some similar examples, like looking for cooking recipes in a book vs online, or a paper encyclopedia vs Wikipedia."
"This website was already dying (i.e., not updated since Julian was imprisoned), but it was still a massive treasure trove for journalists and investigators. It is a very worrying sign of the times that it is vanishing, as state powers seem to be fighting back against democracy and independent journalism."
"Apropos of WikiLeaks itself, does anyone remember "information wants to be free"? As years go by, the www is decreasingly capable of disseminating information without monopoly, government, or some other official backing. Basic web technologies are designed to make documents available. This should all be trivially achievable using early-90s computers. Maintenance/sysadmin shouldn't be a major hurdle. Yet... here we are."
"I don't understand the level of JA/WikiLeaks hate - reading the comments, it seems to be mostly American (DNC leaks related). I do wonder: when the party in power changed but the damaging leaks didn't, it started to become far too apparent that the problem wasn't Bush or the Republicans; the problem was America, irrespective of who was in charge. Maybe that's too hard for many Americans to accept, hence the shooting of the messenger rather than facing up to the uncomfortable truth: that America's vision of itself doesn't reflect reality."
"> I may be getting something wildly wrong here, but I am not sure I see the presence of this Apple ID proxy in Apple’s services logs to be a violation of either its own policies or users’ expectations for using internet services in general. I strongly disagree that the iOS App Store should be treated as an "internet service" rather than a part of the device. The iOS App Store only comes on iOS devices, it comes on all iOS devices, and it is the only way to access a crucial feature of the device. It is, for all meaningful purposes, part of the iPhone in the same way iOS is. It would be a bit like Microsoft saying "explorer.exe? Policy A only covers the OS, and that is clearly not part of Windows! - so therefore you are covered by Policy B". While Apple may be legally in the right, I strongly believe they are morally in the wrong and have betrayed the trust their users put in them to safeguard their privacy. I believe that a casual user of the iPhone would take a look at Apple's iPhone privacy policy and expect that to apply to the iOS App Store as well, as for all intents and purposes that is a part of the iPhone."
"Some trivia: the "DS" in DSID is "Directory Services", which is a giant Apple-internal database. Apple employees and contractors have a DSID too. It's basically a database of all people that Apple knows, and it's very old."
"Can someone explain why the App Store doesn't show the "Ask App Not To Track" dialog? Why do 3rd party apps have to ask for permission to track, but Apple's apps do not?"
"> Not many people want to trust an AI with spending their money or buying an item without seeing a picture or reading reviews. I wonder how much the decline in quality of Amazon's marketplace has affected other parts of the company's business, including Alexa. I remember a time when, if I wanted to buy something online, I'd just buy off Amazon without shopping around. They almost always had the best price, their shipping was fast, and I trusted the quality of what I received. In that world, I could see myself saying my shopping list out loud to an Echo to make things easier. But in Amazon's current free-for-all marketplace, I would never do that. Now I typically check traditional big box stores first, and only go with Amazon if I can't find what I need elsewhere."
"Perhaps a silly question, but can somebody describe how they might lose $10 billion on Alexa in one year? I don't understand the math. That's enough to pay 33,000 employees $330k a year, or some combination of such. I don't think the real numbers come even remotely close to that. So it presumably has to be hardware costs, but Alexa is software, and they have vertical integration of all servers, which presumably would drive voice processing costs to negligible levels as well (and that's assuming 0 on-device caching/learning ability). And this is all assuming that the gross revenue generated by Alexa is $0, which also certainly isn't true. So I don't see where the numbers are coming from?"
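The headcount framing in that comment can be sanity-checked in a couple of lines (the figures are the comment's own back-of-envelope numbers, not Amazon's actual cost structure):

```python
# Rough equivalence check for the comment's numbers: how many employees
# at a fully-loaded cost of $330k/year would a $10B annual loss fund?
loss = 10_000_000_000
cost_per_employee = 330_000
equivalent_headcount = loss // cost_per_employee
print(equivalent_headcount)  # 30303, i.e. roughly the ~33,000 cited
```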
"I use my Alexa all the time, but 99% of it is: Alexa, time! Alexa, set an alarm for 5 minutes! Alexa how many minutes left? Alexa, turn off/on all the lights! That’s basically it. I wrote some apps for it a long time ago to do custom stuff like read me some Reddit pages, but the SDK changed or something and eventually they just died and it wasn’t obvious to me how to recreate it/not worth the effort. I really wish there was an easy way to just put python scripts onto the device or something. The process of going through Amazon is pretty unnecessarily complex and annoying. I know there are ways of doing this stuff with raspberry pi, but also: not really worth the effort. If I cared that much I’d just make a PWA for my house and give that to my wife and kids."
"My current employer was sold to me as a "high documentation" place. What it means in practice is that if you're trying to do something there are 5 outdated documents describing the decision making process for how the project was run, and no documents about how to actually use the resulting software. Occasionally if you ask how to actually do a task in Slack someone will yell at you that you should have searched for a specific, obscurely named document in Google Drive, Confluence, or Github. We've tried a bunch of search tools which successfully surface the million product documents, design documents, PM reports, planning docs, retro docs and standup and oncall notes related to any feature, none of which are up to date."
"I'm convinced that documentation, even for large companies, should just be an Obsidian vault of markdown files maintained via git, rendered on the web either with a simple static site generator or with Obsidian Publish. When I brought this up at my last company it got dismissed as being 'too technical'. I know git can be tricky, but it cannot be that difficult to teach people from non-technical departments to add, commit, and push, and then show maybe one person from each department how to resolve conflicts. Alternatively, build non-technical people a web interface for editing and committing, but allow the devs to just use git as standard. Or there's Obsidian's built-in sync, but I don't know enough about it to know if it scales well in large organisations. What is definitely not the solution is Confluence. I have not met anyone who has a positive thing to say about it. The only reason it is so widely used is that it satisfies whoever is in charge of the finances, since it comes bundled with Bitbucket and Jira."
"Moving to America from France, one of my biggest surprises was how poor the average engineer (person really, but engineers affect me directly at work) is at summarizing concepts clearly. I learned a little later that "summary" exercises are not a thing taught in school here, which surprised me. In France, "le résumé" is an exercise that they constantly drill into students (particularly technical ones), in which you take a 3 page paper and condense it into 100 words. I really hated doing it back in the day, but as an adult I now am very grateful I did and wished other countries made this more prevalent."
"Hi HN, I'm Alex from Terrastruct, where we've been making D2. This actually popped up on HN a couple months back, though it wasn't ready, e.g. not open source yet. It is now! We also put up a site for you to compare D2 with MermaidJS, Graphviz, and PlantUML: https://text-to-diagram.com. Full disclosure, we're a for-profit company. The open-core part is that we make an alternative layout engine which we sell (JetBrains model, i.e. your copy is yours forever if you've paid for 12+ months). It's not packaged with D2, so you won't see it if you don't want it. D2 is perfectly usable without it, and integrates with multiple free open source layout engines (e.g. the one that Mermaid uses, "dagre", is D2's default). If you want to read more about our plans for D2: https://d2lang.com/tour/future. Hope you can check it out! It's got an easy install (and uninstall) process."
"I've been using PlantUML and Mermaid for my own diagrams. Mermaid is quite basic; it lacks functionality that was necessary for me, for example direct connections between attributes of different classes. All in all, PlantUML seems superior to Mermaid, at no cost (both languages are relatively simple to learn). Mermaid is supported by GitHub, which may be a necessary requirement for some. On the other hand, among its many functionalities, PlantUML's JSON support is unusually good-looking out of the box, and if one requires that kind of diagram, it's a great feature, because it requires no syntax knowledge. Both PlantUML and Mermaid mostly produce ugly-looking diagrams (Mermaid more so; dated, to say the least). In PlantUML, this problem is compounded by the explicit lack of layout control, by design. Other warts: both languages have limited comment support - comments are only supported in specific locations of the diagram declarations. D2 could be a very welcome "next gen" diagramming language. However, the devil is in the details - text-to-diagram.com shows very basic functionality, so one must carefully check it against one's requirements. Regarding text-to-diagram.com: there's a mistake - PlantUML does support rich text (although "rich" is a fuzzy definition) - and class diagrams are an important use case which is currently missing. EDIT: clarification about the comment limitations."
"I recently discovered Pikchr from SQLite/Fossil [0] if anyone is looking for something in a similar vein, although it has its own scripting/layout language rather than using Go (a plus IMO, as it's not tied to a specific ecosystem). It's fantastic for making diagrams when you want more control over the layout than Graphviz or MermaidJS but are looking for a similar type of tool. It's also clean C, so it's easy to embed, and there's a WASM build for browser use. It is fairly simple in any general-purpose language to output Pikchr code - I've done this previously for producing autogenerated packet diagrams in documentation. 0: https://pikchr.org/home/doc/trunk/homepage.md"
"For a few months now I've seen a huge improvement on Linux in Firefox's memory management. Previously I had to run Firefox in a separate cgroup to limit its memory usage, because it could easily deplete my whole RAM. And if I closed most of my tabs, it did not release the memory back to the system. Now I never actually reach the limit I set before, and with the Auto Tab Discard extension it is well managed. So kudos to the team for such improvements."
"Firefox stability is funny ... I was at Mozilla for 10+ years and used Nightly as my daily driver on my work Mac. I don't think I got more than one or two dozen crashes in total. A crash was a special occasion, and I would walk to someone's desk to show it off. It barely ever happened. On Nightly. So much love for all the stability work that is happening."
"I can't remember the last time Firefox crashed and I've used it daily on Windows since ... the beginning. Are most issues related to stability more common to Linux/MacOS?"
"I'm always fascinated by Dwarf Fortress whenever I cross paths with it, particularly from a technical architecture point of view. How did they architect the history simulation? How do they efficiently update everything on each tick? What is the game loop like? If anyone has any resources or links to articles, either definitive from the DF developer or conjecture based on exploration and research, I'd love to learn more about how DF works."
"Not sure why Dwarf Fortress is on the front page, but I will always upvote. If you're intimidated by ASCII visuals, consider wishlisting the graphics release, scheduled for Dec 6, 2022: https://store.steampowered.com/app/975370/Dwarf_Fortress/"
"Frontend UX developer perspective on DF: The ascii interface is actually fine and mostly a superficial complaint. The real problem is just how hard the many interfaces are to navigate and learn, and how unconventional their designs are. If I could pick one thing for the UI team to focus on: DF needs a “command palette” to help find/learn all of the game’s many functions."
"GitHub-style rebase-only PRs have turned out to be the best compromise between 'preserve history' and 'linear history' strategies: All PRs are rebased and merged in a linear history of merge commits that reference the PR#. If you intentionally crafted a logical series of commits, merge them as a series (ideally you've tested each commit independently); otherwise, squash. If you want more detail about the development of the PR than the merge commit, aka the 'real history', then open up the PR and browse through Updates, which include commits that were force-pushed to the branch and also fast-forward commits that were appended to the branch. You also get discussion context and intermediate build statuses, etc. To represent this convention within native git, maybe tag each Update with pr/123/update-N. The funny thing about this design is that it's actually more similar to the kernel development workflow (emailing crafted patches around until they are accepted) than BOTH of the typical hard-line stances taken by most people with a strong opinion about how to maintain git history (only merge/only rebase)."
"I want the 'merge' function completely deprecated. I simply don't trust it anymore. If there are no conflicts, you might as well rebase or cherry-pick. If there is any kind of conflict, you are making code changes in the merge commit itself to resolve it. Developers end up fixing additional issues in the merge commit instead of in actual commits. If you use merge to sync two branches continuously, you completely lose track of which changes were done on the branch and which were done on the mainline."
"I don't know how stupid this is on a scale from 1 to 10. I've created a wrapper [1] for git (called "shit", for "short git") that converts non-padded revisions to their padded counterpart. Examples: "shit show 14" gets converted to "git show 00000140"; "shit log 10..14" translates to "git log 00000100..00000140". [1]: https://github.com/zegl/extremely-linear/blob/main/shit"
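For illustration, the rewrite rule implied by those two examples can be sketched in Python (the real wrapper is a short shell script; the padding scheme below is inferred from the examples, not taken from the actual source):

```python
import re

def shit_to_git(args):
    """Expand bare revision numbers into their padded hash prefixes.

    Inferred rule: N becomes N zero-padded to seven digits plus a
    trailing zero, so 14 -> 00000140. Ranges like 10..14 work because
    each run of digits is rewritten independently. (A real wrapper
    would need to avoid rewriting digits inside branch names, paths, etc.)
    """
    pad = lambda m: f"{int(m.group(0)):07d}0"
    return ["git"] + [re.sub(r"\d+", pad, a) for a in args]

print(shit_to_git(["show", "14"]))     # ['git', 'show', '00000140']
print(shit_to_git(["log", "10..14"]))  # ['git', 'log', '00000100..00000140']
```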
"Paper: https://www.science.org/doi/10.1126/science.ade9097 Code: https://github.com/facebookresearch/diplomacy_cicero Site: https://ai.facebook.com/research/cicero/ Expert player vs. Cicero AI: https://www.youtube.com/watch?v=u5192bvUS7k RFP: https://ai.facebook.com/research/request-for-proposal/toward... The most interesting anecdote I heard from the team: "during the tournament dozens of human players never even suspected they were playing against a bot even though we played dozens of games online.""
"I would love to see this kind of thing applied to an RPG. Randomly generate a city full of people. Make a few dozen of them the important NPCs. Give them situations and goals, problems they need to solve and potential ways to solve them. Certain NPC's goals are opposite others'. Then drop the player into that world and have the 'quests' the player is performing be generated based on the NPCs needing their help. Updates wouldn't be adding new hand-written stories; it would be adding more complexity, more goals, more problems, more things that can be, and the story would generate itself. Done right, this would be incredible."
"https://www.science.org/doi/10.1126/science.ade9097 Abstract: Despite much progress in training AI systems to imitate human language, building agents that use language to communicate intentionally with humans in interactive environments remains a major challenge. We introduce CICERO, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players. CICERO integrates a language model with planning and reinforcement learning algorithms by inferring players’ beliefs and intentions from its conversations and generating dialogue in pursuit of its plans. Across 40 games of an anonymous online Diplomacy league, CICERO achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game."
"In theory this is meant to be one of the advantages of end-to-end encryption: no more "accidental" leakage of user data between users, leakage in logs, etc. (remember Facebook's logging incident? [0]), as data is only available on end-user devices. And if you look at Apple's documentation [1], they say that iCloud is end-to-end encrypted. This is obviously not accurate, as Apple keeps decryption keys for themselves. But this issue is even worse: here, the end-to-end encryption was circumvented in such a bad way that this bug could surface. [0]: https://krebsonsecurity.com/2019/03/facebook-stored-hundreds... [1]: https://support.apple.com/en-us/HT202303"
"This happened to me during a Google Takeout export when I was degoogling in late 2019. I recall going through some photos from the early 2010s and some random pictures of other people were popping up. About a month or so later I received an email from Google letting me know that some of my files may have accidentally been in other people's exports. Since then, I stopped using apps like Google Photos and cloud storage in general. If I do, my files will be encrypted before I upload them. Here's the original story: https://9to5google.com/2020/02/03/google-photos-video-strang..."
"This should be a showstopper, critical issue. I’m surprised to see this still be in the wild after being posted on Friday of last week."
"> With the tail-call approach, each bytecode now gets its own function, and the pathological case for the C/C++ compiler is gone. And as shown by the experience of the Google protobuf developers, the tail-call approach can indeed be used to build very good interpreters. But can it push to the limit of hand-written assembly interpreters? Unfortunately, the answer is still no, at least at its current state. > The main blockade to the tail-call approach is the callee-saved registers. Since each bytecode function is still a function, it is required to abide to the calling convention, specifically, every callee-saved register must retain its old value at function exit. This is correct: wasting callee-saved registers is a shortcoming of the approach I published about protobuf parsing (linked from the first paragraph above). More recently I have been experimenting with a new calling convention that uses no callee-saved registers to work around this, but the results so far are inconclusive. The new calling convention would use all registers for arguments, but allocate registers in the opposite order of normal functions, to reduce the chance of overlap. I have been calling this calling convention "reverse_cc". I need to spend some time reading this article in more detail to more fully understand this new work. I would like to know if a new calling convention in Clang would have the same performance benefits, or if Deegen is able to perform optimizations that go beyond this. Inline caching seems like a higher-level technique that operates above the level of individual opcode dispatch, and is therefore somewhat orthogonal."
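As a toy illustration of the per-opcode-function structure under discussion (not the Deegen or protobuf implementation): Python has no guaranteed tail calls, so a trampoline stands in for the musttail jump a C interpreter would use, and every opcode and name below is made up.

```python
# Each opcode gets its own handler; handlers return (next_handler, state)
# instead of calling the next handler directly, so the trampoline in
# run() keeps the native stack flat -- the role that musttail and custom
# calling conventions play in a C implementation.

def op_push(state):
    pc, stack, code = state
    stack.append(code[pc + 1])      # operand follows the opcode
    return dispatch(code, pc + 2, stack)

def op_add(state):
    pc, stack, code = state
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)
    return dispatch(code, pc + 1, stack)

def op_halt(state):
    _, stack, _ = state
    return None, stack              # no next handler: stop the trampoline

HANDLERS = {"PUSH": op_push, "ADD": op_add, "HALT": op_halt}

def dispatch(code, pc, stack):
    return HANDLERS[code[pc]], (pc, stack, code)

def run(code):
    handler, state = dispatch(code, 0, [])
    while handler is not None:
        handler, state = handler(state)
    return state                    # final stack

print(run(["PUSH", 2, "PUSH", 3, "ADD", "HALT"]))  # [5]
```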
"I’ve been working on an early design of a high-performance dynamic binary translator that cannot JIT, and have reached a very similar conclusion as the author. We have an existing threaded interpreter but it’s a mess of hard-to-maintain assembly for two architectures, and we run into funny issues all the time where the two diverge. Plus, being handwritten by people who are not scheduling experts, there is probably some performance left on the table because of our poor choices and the design making it difficult to write complex-but-more-performant code. Nobody wants to write an efficient hash for TLB lookups in a software MMU using GAS macros. The core point I’ve identified is that existing compilers are pretty good at converting high level descriptions of operations into architecture-specific code (at least, better than we are given the amount of instructions we have to implement) but absolutely awful at doing register selection or dealing with open control flow that is important for an interpreter. Writing everything in assembly lets you do these two but you miss out on all the nice processor stuff that LLVM has encoded into Tablegen. Anyways, the current plan is that we’re going to generate LLVM IR for each case and run it through a custom calling convention to take that load off the compiler, similar to what the author did here. There’s a lot more that I’m handwaving over that’s still going to be work, like whether we can automate the process of translating the semantics for each instruction into code, how we plan to pin registers, and how we plan to perform further optimizations on top of what the compiler spits out, but I think this is going to be the new way that people write interpreters. Nobody needs another bespoke macro assembler for every interpreter :)"
"I work on a game that mostly uses Lua for logic code, sitting on top of a C/C++ engine. One of the engine developers implemented LuaJIT years ago and found that for actual performance, the interpreter/JIT difference was negligible; the most expensive thing was the actual switch between Lua code and C code, which with a large API is a constant, ugly background performance loss. The Lua code by itself did not run long enough to actually profit much from optimizations. So, back then we discussed possible solutions, and one idea was to notice an upcoming C call in the bytecode ahead of execution and detect the stability of the arguments ahead of time. A background thread extracts the values, performs the call's argument processing, and pushes the return values onto the stack, finally setting a "valid" bit to unblock the C call (which by then actually is no longer a call). Both sides never have a complete cache eviction and live happily ever after. Unfortunately I have a game-dev addiction, so nothing ever came of it. But similar minds might have pushed similar ideas... so asking the hive for this jive. Anyone ever seen something similar in the wild?"
"Amazingly brilliant work, especially given the CPU capabilities at the time. Carmack's use of BSP trees inspired my own work on the Crash Bandicoot renderer. I was also really intrigued by Seth Teller's Ph.D. thesis on Precomputed Visibility Sets, though I knew that would never run on home console hardware. None of these techniques is relevant anymore given that all the hardware has Z buffers, obviating the need to explicitly order the polygons during the rendering process. But at the time (mid 90s) it was arguably the key problem 3D game developers needed to solve. (The other was camera control; for Crash, Andy Gavin did that.) A key insight is that sorting polygons correctly is inherently O(N^2), not O(N lg N) as most would initially assume. This is because polygon overlap is not a transitive property (A in front of B and B in front of C does NOT imply A in front of C, due to cyclic overlap.) This means you can't use O(N lg N) sorting, which in turn means sorting 1000 polygons requires a million comparisons -- infeasible for hardware at the time. This is why many games from that era (3DO, PS1, etc.) suffer from polygons that flicker back and forth, in front of and behind each other: most games used bucket sorting, which is O(N) but only approximate, and not stable frame to frame. The handful of games that did something more clever to enable correct polygon sorting (Doom, Crash, and I'm sure a few others) looked much better as a result. Finally, just to screw with other developers, I generated a giant file of random data to fill up the Crash 1 CD and labeled it "bsptree.dat". I feel a bit guilty about that given that everyone has to download it when installing the game from the internet, even though it is completely useless to the game."
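The non-transitivity point can be demonstrated with a toy example: give three "polygons" a cyclic in-front-of relation and any comparison sort must emit an order that violates at least one constraint (the relation below is fabricated purely for illustration):

```python
from functools import cmp_to_key

# Hypothetical cyclic overlap: A in front of B, B in front of C,
# C in front of A -- possible for polygons, impossible for a total order.
IN_FRONT = {("A", "B"), ("B", "C"), ("C", "A")}

def cmp(p, q):
    if (p, q) in IN_FRONT:
        return -1
    if (q, p) in IN_FRONT:
        return 1
    return 0

order = sorted(["A", "B", "C"], key=cmp_to_key(cmp))
# Whatever order the sort emits, at least one in-front-of constraint
# is broken, because a 3-cycle admits no consistent linear order.
violated = [(p, q) for (p, q) in IN_FRONT if order.index(p) > order.index(q)]
print(order, violated)
```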
"BSP (binary space partitioning) was a well known algorithm, not something Carmack picked out of obscurity. It is well covered in every edition of Foley and van Dam's bible "Computer Graphics". The arcade game "I, Robot" from Atari (1983) used BSP to render the polygons back to front -- there was no z-buffer. That isn't to deny that Carmack was brilliant. But him using BSP isn't some masterstroke of genius in itself."
"I was studying computer graphics at the time. The book we used was "Computer Graphics: Principles and Practice"; I don't recall which edition. BSP trees are covered in the book and, like the article says, had been written about more than a decade prior. What Carmack had was the ability to read research papers and translate them into working code. I can do that too, but it does seem to be a less common skill in the software world."
"How come these map projects still use raster tiles? Are there no open source map projects that render the OSM data to vector data, and render that vector data on the client's GPU? Maybe raster tiles are better at something I'm missing, but vector maps are easier to style[0] and localize, they're sharper, and they're easier to rotate and zoom smoothly. Maybe it's harder than I think to render it on all sorts of clients including mobile? When writing this I found out that MapTiler[1] is maintaining MapLibre GL JS[2], a fork of a Mapbox project to do just that. It would be interesting to see the difference between self hosting raster and vector maps and compare pros and cons. You can even render raster tiles from vector tiles on the server if the client needs it[3]. [0] https://openmaptiles.org/docs/style/mapbox-gl-style-spec/ [1] https://www.maptiler.com/open-source/ [2] https://github.com/MapLibre/maplibre-gl-js [3] https://openmaptiles.org/docs/host/tileserver-gl/"
"Vector map of the whole Earth in PMTiles format is only ~65GB[1] and doesn't need any server or database - it's just a static file which you can host wherever you want. bdon (author of PMTiles) already commented on this thread. I recommend taking a look at https://protomaps.com/docs - compared to this, raster tile servers sound like ancient technology. [1]: https://app.protomaps.com/store/planet-z14"
"Google maps with its massive places data set is just too good. It’s expensive but I find it is always consistently up to date and mostly reliable. Have found some bugs with the places api which I reported, such as it not working for some queries which are off by 0.001 lat/lng"
"I'm not sure what the intent is, and what other people do with it, but I thoroughly _love_ having these named colours for when I'm hacking and prototyping. I can just start typing out a colour and autocomplete will show me a list. But I don't find myself ever using them in production. Also: Finally, a tool to help me decide between greenyellow and yellowgreen."
"In case the named web colors aren't enough, we're making excellent progress naming every color in the RGB space. https://colornames.org/"
"Totally irrational, of course, and now set in stone for backwards compatibility. https://arstechnica.com/information-technology/2015/10/tomat... CSS Color Module Level 4 (draft) admits as much, and states "their use is not encouraged." https://www.w3.org/TR/css-color-4/#named-colors"
"Technical write-up by the security researcher at https://emily.id.au/tailscale. PS: she's looking for an employer rn // hire her!"
"> In theory, there is no path for a malicious Tailscale control plane to remotely execute code on your machine, unless you happen to run network services that are designed to allow it, like an SSH server with Tailscale-backed authentication. Now I feel less crazy for not using Tailscale SSH for similar reasons. I'd like to see a security evaluation of Tailscale, on a per-feature basis. I'd like to see tailscaled run with far fewer privileges. Is there a Tailscale alternative that just does Wireguard + NAT traversal and doesn't try to do key management?"
"Do they have enough logs to reach out to the people that were affected? As far as vulnerabilities go, this set is one of the worst I've seen this decade, and they seem rather straightforward. It would be nice to get a blog post from them that goes a bit into impact, not just a report that tells you to update. It's nice that they responded quickly, but I feel like this shouldn't have happened in the first place for a network security company, and it makes the Windows client feel like a bit of an afterthought. Looks like they have a PR open to switch it to named pipes; I hope that is properly reviewed by someone who knows Windows APIs before it's merged."
"Blockchain was invented to solve one particular problem: distributed consensus on a sequence of transactions, where the choice of which transaction to include from a set of conflicting transactions is irrelevant. The latter property here is key to understanding where blockchain is useful. It was created to solve the "double spend problem", i.e. two transactions that spend the same coin but send it to different recipients (so they conflict and cannot both be included in the canonical list of transactions). A double spend is the result of the sender either (a) making a mistake, or (b) attempting fraud. In both cases the important property is that as long as only one of these conflicting transactions is included, the system works. Only if your problem exhibits the above property (and it's a distributed system) does using a blockchain make sense."
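A minimal sketch of that property in Python (toy tuples, not any real chain's transaction format): at most one spend of each coin enters the accepted list, and which of the conflicting spends wins is irrelevant to correctness.

```python
def resolve_conflicts(transactions):
    """Accept at most one spend per coin; drop later conflicting spends.

    Picking the *first* spend seen is arbitrary -- any single choice
    per coin yields a valid history, which is exactly the property
    that makes consensus on ordering sufficient.
    """
    spent = set()
    accepted = []
    for coin, recipient in transactions:
        if coin in spent:
            continue  # double spend: mistake or fraud, safe to drop
        spent.add(coin)
        accepted.append((coin, recipient))
    return accepted

txs = [("coin1", "alice"), ("coin1", "bob"), ("coin2", "carol")]
print(resolve_conflicts(txs))  # [('coin1', 'alice'), ('coin2', 'carol')]
```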
"I had a real experience that perfectly reflects the research presented here (safer ledgers useful, blockchains are not). I accidentally got wrapped up in a project to automate some HR functions, and the product manager demanded that it must be blockchain because blockchains are the future. It turns out that append-only databases are well-suited for HR records, and (especially when dealing with things like background checks, immigration papers, etc.) it doesn't hurt to have a history with cryptographically-verifiable date stamps. We used an existing database that did all of the above, told everyone it was blockchain, and released the product. That was a great strategy for a few years until everyone realized blockchain was a boondoggle, and now you never need to work with anyone who still believes in it. You can just understand their mention of blockchain to be a sign that you need to avoid doing business with them."
"> [Andy Jassy] said something like this: “All these leaders [CIOs and CTOs of huge enterprises] are asking me what our blockchain strategy is. They tell me that everyone’s saying it’s the future, the platform that’s going to obsolete everything else. I need to have a good answer for them. I’ll be honest, when they explain why it’s wonderful I just don’t get it. You guys got to go figure it out for us.” To me, the tell is not just that Andy didn't understand. It's that all of these leaders said "everyone says it's the future", but not one of them said "I have this problem and here's how blockchain solves it for me.""
"Because corporations are doing the majority of that pro-environmental advertising. I mean that both in terms of companies making changes (both real and greenwashing) and the news/media corporations reporting on it. Telecommuting could be absolutely massive for reducing emissions, could bring down urban house prices, improve inter-family relationships, and revitalize suburban neighborhoods (e.g. more walkable areas), plus increase wealth in relatively poor rural areas. Even some corporations are starting to realize that telecommuting isn't their enemy, but large ships move slowly, and recently we've been seeing a lot of "return to work" used as a way to conduct layoffs with less negative PR and stock tanking. This isn't a byproduct but a goal of return-to-work (e.g. see Musk's text message conversation from the Twitter-lawsuit discovery)."
"There is nothing more illogical in modern society than commuting to an office every day. Employees waste 2 of their 16 available waking hours in the non-productive commute while incurring significant financial costs (lease/insurance/fuel/energy) in order to support this patently absurd activity. Employers waste time and energy negotiating leases, re-arranging offices, purchasing AV equipment for meeting rooms, etc., in addition to paying the likely enormously expensive lease itself. The impacts on the environment, the number of hours of human life wasted in commute, the pointless buildings and associated costs to employers as well as the public infrastructure to support it (roads, trains, busses, etc.) are all incredibly wasteful. Surely, all of this could only be justified if physical presence had a dramatic impact on productivity. Yet, we cannot tell one way or the other if it actually improves outcomes."
"Two main problems: 1. Middle and senior management who don't want to lose control or be rendered less effective. 2. Engineers who are not trained in written communication and largely cannot autonomously move a group towards a goal without a lot of supervision. If you solve no. 2, that acts counter to no. 1, because middle management will be questioned: why do we need you? If a group of engineers can function on their own towards a common goal, then the manager's role is more or less redundant. Sure, there may be a need for psychological support, but you surely won't need the current ratio of engineers to managers. There is a deep-rooted old-school interest in staying physically connected. This won't go away anytime soon. I am not debating whether that is right or wrong, but the general notion that 'we are better if we are physically together' still persists. I don't know if this is a genuine feel-good-together feeling or just a made-up emotion to mask point no. 1 above. I am flummoxed by how executive leadership is simply blind to these facts in most companies. The CEO could declare a fully remote policy, sort of the exact opposite of what Musk did at Twitter, and drive productivity higher. The cynic in me says execs can't force this decision because senior management will simply come back and say 'we cannot be this productive with a fully remote team'. I don't know, but I for one cannot understand the irrational exuberance behind RTO."
"One of my professors in grad school was really into wine and every couple of years he would put on an after-hours wine tasting class for a semester. One of the points he made was that there are absolutely wines which are objectively better and worse and that experts can reliably tell them apart. He had met enough experts who could identify a vineyard and vintage blind to know there was something to it. But sitting on top of that there is a frothy market that is driven by fads, speculation, and hype.He was of the opinion that generally speaking the quality of a typical wine increases monotonically with price up until around the $40 range with the big steps around the $5, $10, and $20 price points. But above $50 or so, you're no longer paying for higher quality, per se. It's more that you are paying for a unique flavor profile and reliability. But unless you're seeking out that particular flavor profile, you can get a bottle that is just as good for $30-40 (and occasionally even cheaper). And above a few hundred dollars it's all just fads, speculation, and hype. (He liked to say that the people who buy those wines have "more money than sense.") They're good wines, but you can get a bottle that is just as good for a fraction of the price."
"I worked at WineSpectator.com in 2012-2013. I'll say this in their favor: the wine tastings were blind. A bunch of interns would set up the wine tasting, pouring the wine into glasses and then hiding the bottles. Only after everything was set up were the editors allowed into the room. So when the editors drank the wine, they had no idea if they were drinking a $9 bottle or a $900 bottle. They had to focus on the taste and balance, and write their report. Only afterwards were they told which wine they had tasted. Having said that, I'll also mention that the way the editors struggled for new adjectives did sometimes make me laugh: "a vast, hearty body, notes of blue and a hint of graphite steel"; "a radiance similar to the sun at dawn, a strong body, notes of orange""
"Two scientific points to bring up about this article: 1. When the author talks about Coke vs. Pepsi, he comes to the conclusion that the reason people prefer Pepsi in blind taste tests but Coke in unblinded ones is "Think of it as the brain combining two sources of input to make a final taste perception: the actual taste of the two sodas and a preconceived notion (probably based on great marketing) that Coke should taste better." I've read elsewhere, though, that the reason for this difference is actually the difference between "sip tests" and "drinking tests". Pepsi is objectively sweeter than Coke, so if you're just taking a few quick sips (as most taste tests are set up), you may prefer the sweeter taste of Pepsi because it stands out more, but if you're drinking a whole bunch, the sweeter taste of Pepsi can feel cloying. 2. The article includes this quote: "the correlation between price and overall rating is small and negative, suggesting that individuals on average enjoy more expensive wines slightly less." I wonder if this could be due to Berkson's paradox, a statistical paradox that was on the HN front page yesterday, https://news.ycombinator.com/item?id=33677781. After all, I'm guessing most truly bad wines may not be rated at all."
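The Berkson's-paradox guess is easy to check with a quick simulation. This is a sketch under invented assumptions: price and intrinsic enjoyment are generated independently, but a wine only gets rated when the two together clear some notability threshold:

```python
import random

random.seed(0)

# Price and enjoyment are independent, so their true correlation
# across ALL wines is approximately zero.
wines = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20000)]

# Selection effect: a wine is rated only if price + enjoyment is high
# enough (cheap, unremarkable wines never get reviewed at all).
rated = [(p, e) for p, e in wines if p + e > 1.0]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(p for p, _ in pairs) / n
    my = sum(e for _, e in pairs) / n
    cov = sum((p - mx) * (e - my) for p, e in pairs) / n
    vx = sum((p - mx) ** 2 for p, _ in pairs) / n
    vy = sum((e - my) ** 2 for _, e in pairs) / n
    return cov / (vx * vy) ** 0.5

print(f"all wines:   r = {corr(wines):+.2f}")   # roughly zero
print(f"rated wines: r = {corr(rated):+.2f}")   # clearly negative
```

Conditioning on being rated induces a negative correlation between price and enjoyment even though none exists in the full population, which is exactly the pattern the quoted study found.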
"My wife and I use Briar for household communication because of subsidiarity rather than any direct privacy concerns. Out of all the messenger projects we've tried, Briar actually works for local communication. It's actually instant messaging, without any client hiccups or latency (looking at you, Signal). We've tried a ton of other options, but we keep ending up back at Briar. There are points of UX friction in the name of good opsec that are inconvenient but totally understandable given the project goals, the big ones being that you have to manually log in after any reboot and that notifications are intentionally sparse, so good luck using a smartwatch for reading or replying. Otherwise, the forums and blogs are great for managing household projects, IM is a dream, and as a bonus, anyone willing to install and use it probably has a large enough values overlap that we can use it as a social pre-filter for close friends. The only other option that has come even remotely close to being as functional as Briar is DeltaChat. The only issue that stops us from using DeltaChat (or email in general) is that we both have email hosting in Europe while we live in the US, so neither of us, being frugal in principle, wants to send information to Europe and back in order to tell the person 100 ft away to come help bring in the groceries."
"Just wanna say this app saved my family and me when we went on a cruise. Normally, we would have had to pay to use the chat service via the ship's paid-only Wi-Fi (since we get no phone reception at sea). Without needing to pay for the Wi-Fi, we were all able to use Briar to communicate whilst connected to the network, which made coordinating and finding each other on the ship way easier. It was great and worked really well. So thanks, Briar!"
"Related:
Briar Project – Secure messaging, everywhere - https://news.ycombinator.com/item?id=33412171 - Oct 2022 (7 comments)
Briar has been removed from Google Play - https://news.ycombinator.com/item?id=30498924 - Feb 2022 (85 comments)
Briar Desktop for Linux - https://news.ycombinator.com/item?id=30023169 - Jan 2022 (84 comments)
Briar 1.4 – Offline sharing, message transfer via SD cards and USB sticks - https://news.ycombinator.com/item?id=29227754 - Nov 2021 (110 comments)
Secure Messaging, Anywhere - https://news.ycombinator.com/item?id=27649123 - June 2021 (63 comments)
Briar Project - https://news.ycombinator.com/item?id=24031885 - Aug 2020 (185 comments)
Briar and Bramble: A Vision for Decentralized Infrastructure - https://news.ycombinator.com/item?id=18027949 - Sept 2018 (11 comments)
Briar Project - https://news.ycombinator.com/item?id=17888920 - Aug 2018 (10 comments)
Briar: Peer-to-peer encrypted messaging and forums - https://news.ycombinator.com/item?id=16948438 - April 2018 (1 comment)
Darknet Messenger Briar Releases Beta, Passes Security Audit - https://news.ycombinator.com/item?id=14825019 - July 2017 (85 comments)"
"http://archive.today/jW1CZ"
"I don't know if I have ADHD, but if I do an online survey it says I most probably definitely do. I was never diagnosed as a child because I largely functioned as a kid and was quiet and non-disruptive, but looking back the signs were all there.Fast forward as an adult I have a number of coping mechanisms and one of them is to have something on in the background. I have never associated the effectiveness with the noise itself, but rather with something that is keeping part of my brain quiet. It prevents my mind from wandering. It is ideally something I already know. Like a show I have seen before or a podcast that I am okay not fully retaining. Not enough stimulation and I get distracted easily, too much stimulation and I shut down completely. Music doesn't usually work for me."
"I have ADHD and I straight-up don't do silence. Between a restless brain and tinnitus, actual silence, or anything with a noise floor below ~15 dB, just doesn't sit right with me. 20-30 dB is about ideal, especially when trying to sleep. I have some sort of fan/filter in just about every room. When I'm working, I'm almost always listening to music (unless I'm way overstimulated). It probably started when I was much younger with much worse asthma and always had a HEPA filter running in the background. Eventually this turned into basically always having some kind of fan in my primary locations. Right now I dig the Coway air filters on low or medium. Even beyond the alleged "noise floor dopamine boost", I find some kind of background whoosh really nice for masking otherwise variable sounds, such as cars, airplanes, and the wind, which are far more distracting. 10/10 would recommend running some sort of air filter all the time. Plus, cleaner air (air pollution has all kinds of bad effects)."
"It doesn't just run Minecraft, it now runs a smooth GPU-accelerated GNOME desktop, including things like YouTube videos: https://cdn.masto.host/sigmoidsocial/media_attachments/files... This doesn't yet work out of the box, but the next few months will be very exciting."
"These open-source GPU driver guys are sick! About two decades ago, I had a VIA UniChrome integrated GPU and the OpenChrome project had me able to run games on Linux. It was sick. I played Unreal Tournament (which had a Linux version that worked better for me than the Windows one), and I think at one point my introduction to open source was having to modify another game's source code so that it would allocate less GPU memory for a texture (the texture then kind of smeared over the rest of the screen, but it was for a UI element so the game was still playable). Love to see there are people still doing that stuff today, especially since this stuff is probably far more complex now than it was then."
"Already using Asahi Linux on my M1 Air; can't wait until the GPU stuff lands!Extremely impressive work by all involved!"
"It's a modern Web Audio implementation of this Mac software from 1986 (last updated in 2004).Music Mouse - An Intelligent Instrument - https://web.archive.org/web/20220629172536fw_/http://retiary... (Archived because the original site is quite slow: http://retiary.org/ls/programs.html)It was written by Laurie Spiegel, a composer and early pioneer in electronic music.> Music Mouse is an algorithmic musical composition software developed by Laurie Spiegel. The "intelligent instrument" name refers to the program's built-in knowledge of chord and scale convention and stylistic constraints. Automating these processes allows the user to focus on other aspects of the music in real time.> In addition to improvisations using this software, Spiegel composed several works for "Music Mouse", including Cavis muris in 1986, Three Sonic Spaces in 1989, and Sound Zones in 1990. She continued to update the program through Macintosh OS 9 and, as of 2021, it remained available for purchase or demo download from her website.https://en.wikipedia.org/wiki/Laurie_SpiegelShe was featured in the documentary, Sisters with Transistors. https://sisterswithtransistors.com/"
"Wow, this is probably the most intuitively enjoyable music tool I've ever used.I'm not a musician and I know very little about what makes music good, but playing with this tool felt like I was hearing a better version of my own imagination. Like those scenes in movies where people can suddenly play music and have no idea how they're doing it."
"By total chance, this revealed a flaw with my mouse that's been haunting me for months.My right mouse button intermittently doesn't bring up context menus, which I seemingly confirmed by recording the screen and visualizing click events in a presentation mode. It would show what looked like one right click, but no context menu, or a context menu that appeared and was immediately dismissed.But this revealed that it's actually rapidly sending multiple logical clicks per physical click. The logical clicks are fast enough that the screen recordings didn't differentiate them as separate events - which sent me in the wrong direction, making me think the OS was disregarding clicks. But here, the multiple clicks are clearly audible.So thanks!"
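The failure mode described, one physical click arriving as several rapid logical clicks, is exactly what switch-debounce logic guards against. A minimal sketch; the timings, threshold, and function name are invented for illustration:

```python
# Debounce a stream of click timestamps (milliseconds): clicks arriving
# within `min_gap_ms` of the last accepted click are treated as contact
# bounce from the same physical press and dropped.

def debounce(click_times_ms, min_gap_ms=50):
    accepted = []
    for t in click_times_ms:
        if not accepted or t - accepted[-1] >= min_gap_ms:
            accepted.append(t)
    return accepted

# A worn-out button might report 6 logical clicks for 3 physical presses:
raw = [0, 8, 19, 400, 950, 957]
print(debounce(raw))  # [0, 400, 950]
```

The extra clicks here are only milliseconds apart, which is why a screen recording at ordinary frame rates shows them as a single event while an audio cue exposes them.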
"From this operations engineer's perspective, there are only 3 main things that bring a site down: new code, disk space, and 'outages'. If you don't push new code, your apps will be pretty stable. If you don't run out of disk space, your apps will keep running. And if your network/power/etc. doesn't mysteriously disappear, your apps will keep running. And running, and running, and running. The biggest thing that brings down a site is change. Typically code changes, but also schema/data changes, infra/network/config changes, etc. As long as nothing changes, and you don't run out of disk space (from logs, for example), things stay working pretty much just fine. The trick is to design it to be as immutable and simple as possible. There are other things that can bring a site down, like security issues, bugs triggered by unusual states, too much traffic, etc. But generally speaking those things are rare and don't bring down an entire site. The last thing off the top of my head that will absolutely bring a site down over time is expired certs. If, for any reason at all, a cert fails to be regenerated (say, your etcd certs, or some weird one-off tool underpinning everything that somebody has to remember to regen every 360 days), it will expire, and it will be a very fun day at the office. Over a long enough period of time, your web server's TLS version will also be obsoleted in new browser versions, and nobody will be able to load the site."
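Cert expiry is also one of the easier failure modes to monitor for. A sketch of the arithmetic: the `notAfter` string uses the format Python's `ssl.getpeercert()` returns, and `check_host` (an invented helper name) shows how you might wire it to a live server:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> float:
    """Days left given a cert's notAfter field, e.g. 'Jun  1 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - now).total_seconds() / 86400

def check_host(host: str, warn_days: int = 30) -> float:
    """Fetch the live cert for a host and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    days = days_until_expiry(cert["notAfter"], datetime.now(timezone.utc))
    if days < warn_days:
        print(f"WARNING: {host} cert expires in {days:.1f} days")
    return days

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(days_until_expiry("Jan 31 00:00:00 2024 GMT", now))  # 30.0
```

Running something like this daily against every endpoint, including internal ones like etcd, turns the "very fun day at the office" into a calendar reminder.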
"> This left a lot wondering what exactly was going on with all those engineers and made it seem like it was all just bloat.I was partly expecting the rest of the article to explain to me why exactly it wasn't just bloat. But it goes on talking about this 1~3-person cache SRE team that built solid infra automation that's really resilient to both hardware and software failures. If anything, the article might actually persuade me that it was all bloat."
"I did SRE consulting work for a phase of my career... as the author points out, these systems are scaled out and resilient, but what happens next is entropy. Team sizes shrink, everything starts to be viewed through a cost cutting / savings lens, overtaxed staff start ignoring problems or the long-term view because they are in firefighting mode, it becomes hard to attract new talent because the perception is "the good times are over." Things start to become brittle and/or not get the attention needed, contractors are brought in because they are cheaper and/or bits get outsourced to the cheapest bidder... the professional care and attention like the author clearly brought just starts to shift over time. Consultants like me are brought in to diagnose what's wrong - the good staff could write our briefs, they know what's going on - and generally we slap a band-aid on the problem because management just wants to squeeze whatever value they can out of the assets rather than actually improve anything."
"Damn, I was afraid this was going to be someone stealing my genius idea. Thankfully it was just tangential to it. The idea: train an AI to construct entire threads based only on the input of the title and link. There should be plenty of training material, and frankly I think the results would be hilariously similar to the real threads. When new submissions are added to HN, the system would fetch the title and link and predict the entire thread. You could view the prediction on something like hnpredicted.com/?id=33680661. When the real thread has had no new comment added for five days, it is reintegrated into the system, further training the model. Possibly, the model could be trained on the actual content of the link. But I suspect just using the short title might be better: the signal is more concentrated there, and web pages contain lots of useless text. Now that I've posted the idea, feel free to steal it of course. It's documented that I came up with it anyway. :)"
"Unrealistic.Not a single person complained about link not working with JavaScript disabled."
"As a US poster, I need all units of measure to be what the Founding Fathers used and all pickup trucks to be what the Founding Fathers drove.As an EU poster, I won't understand the US obsession with large vehicles and I will recommend bicycling instead.As a programmer, I think we can improve performance if we multithread, containerise, microservice, and store n-depth JSON in a No-SQL database.As a Python programmer, this doesn't look Pythonic. We should be using pythonic Python, especially the new release that adds the syntactic sugar that we've all been waiting for."
"> Our diagnosis is that individual developers do not pay for tools.
I know this first hand, having built a developer tool startup and failed to reach any meaningful level of revenue. In the end, the tech was bought out by a larger company to recover a fraction of our VC investment. The challenge is that when you're building software for developers, they already know how it must work. It's like trying to sell magic tricks to magicians. Sell magic to regular people, and you'll see some significant revenue. I've used Kite before. It was OK. But I am a SWE. It's entirely possible that Kite would have seen major adoption if the push was towards non-technical folks trying to get their feet wet in software, e.g. data scientists or business users. The reason BI tools sell so well at the moment is that you have tons of C-level execs who like the appeal of a business-optimizing tool requiring little to none of any actual software development. Let that be a lesson to everyone. You can't blow away developers. They're just too damn ~~smart~~ well-informed. Edit: Another anecdote: a buddy of mine built a bespoke OCR and document indexing/search tool. He has ~60 paying clients (almost exclusively law firms and banks) that primarily work with printed pages on paper. No SaaS. No free tier. The client data resides on an on-premise Windows box, avoiding issues with sensitive data in the cloud, etc. He's a solo dev with support contracts and nets something like $1000/month from each client. For your average lawyer/paralegal, the ability to locate and reference a single page from thousands of pages in under a second is magic. So they pay for it wholeheartedly."
""Our diagnosis is that individual developers do not pay for tools. Their manager might, but engineering managers only want to pay for discrete new capabilities, i.e. making their developers 18% faster when writing code did not resonate strongly enough." I never used Kite, but I've tried GitHub Copilot twice and found it marginal at best (and distracting at worst, which is why I turned it off both times). If Kite was similar, the reason I'm not paying is that coder AIs are not providing any value. Developers are somewhat reluctant to pay for tools, but I think you can get them to pay for things that are worth it. I've been paying for code editors for years."
"“Our 500k developers would not pay to use it. Our diagnosis is that individual developers do not pay for tools.” I don't like depending on something I could lose in a month or that tethers me to the internet. I consider that more a service than a tool. I'd prefer to just buy something once that just works, but that business model might be dead too, since people will pirate things that aren't tethered to some server-side component. I guess what I'm saying is that I want to buy tools, but people are only renting them out. Personally, I'm largely holding out hope this becomes someone's open-source passion project and I can truly own my tools."
"I built a lighting system for <hotel chain you've heard of> to save energy by turning off hallway lights when not in use. The environmental aspect was great and saved hundreds of thousands in electricity. Someone eventually realized that the mesh network I built to connect all the lights together and report usage statistics could also be used to track employees moving throughout the building and catch them taking unauthorized breaks in the stairwell, so that's its main purpose now.I'm a lot more paranoid about privacy these days."
"When I was at Akamai about 5 years ago, I was involved in building the system for making their CDN compliant in China. There were two main features, and they were activated on all servers running inside mainland China (not HK, Macau, or Taiwan): 1. Logs of the CDN were sent in real time to the ministry of technology; there was about a 15-minute delay if I remember correctly, and they could impose fines if the logs were delayed. Each log entry included the URL visited, the IP address of the visitor, and a few other things. Perhaps the user agent? I forget. 2. The ministry of technology had a special API to block URLs on the CDN. Basically, they provided a list of URLs that would return a 451, and of course those logs also went to the government. No other country had this kind of access at the time, but it was considered critical for the business to continue operating in China. As I understand it, these features are required to comply with Chinese government regulations, and other CDNs like Cloudflare and CloudFront have also built similar capabilities. Perhaps jgrahamc can comment on what Cloudflare did? I feel quite guilty about being involved with that project, but the business was set on building it, so I did what I could to limit the blast radius. I would not be surprised if someone got arrested or was killed because of it."
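Mechanically, the blocking feature the comment describes amounts to a blocklist lookup before serving, plus a log record per request. A toy sketch; all names are invented, and this is not Akamai's actual implementation:

```python
from datetime import datetime, timezone

# HTTP 451 "Unavailable For Legal Reasons" is the status code the
# comment mentions for government-mandated URL blocks.
def handle_request(url, client_ip, blocklist, log):
    """Decide the response status and record the access."""
    status = 451 if url in blocklist else 200
    log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "url": url,
        "client_ip": client_ip,
        "status": status,
    })
    return status

blocklist = {"http://example.cn/banned-page"}
log = []
print(handle_request("http://example.cn/banned-page", "203.0.113.5", blocklist, log))  # 451
print(handle_request("http://example.cn/ok", "203.0.113.5", blocklist, log))           # 200
```

Note that both served and blocked requests are logged, which is the detail that makes such a system useful for surveillance, not just censorship.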
"I worked at an ad-tech start-up in Berlin run by two of the most evil f*kers I've ever encountered. I built out their principal ad auction algorithm and a lot of the back-end to support it, and all they did with it was target vulnerable groups of people at particular times of the week when they thought they were at their lowest ebb. One meeting in particular still stands out: a social media giant that everyone knows was in town meeting the founders to sell additional personalization data. Before that meeting, I thought the things the start-up was doing were a bit sketchy, maybe borderline unethical. During the meeting itself, it was more like sitting around a table with Dr. Evil and a few henchmen. They were actively, unambiguously picking vulnerable groups for ad re-targeting. And that's not even the worst of it: the meeting wrapped up and one of the founders said "OK guys, let's go get some beers and bring some girls". Then this despicable excuse for a man promptly walked out into the office, pointed at a few female employees and said "You, you and you, come with us now"."
"You could overcome the RPi scarcity by migrating the code to the Teensy platform, which, aside from being less power hungry than the RPi 2/3/4, is a lot cheaper and more easily available. Not an easy task since there's no Linux under the hood, but there are some excellent audio/MIDI libraries to help; commercial-level synthesizers have already been built with it. By combining the breath sensor data with other pressure sensors you could end up with a very expressive instrument.
https://www.pjrc.com/teensy/index.html
https://www.pjrc.com/teensy/td_libs_Audio.html
https://www.pjrc.com/teensy/td_libs_MIDI.html
https://www.youtube.com/watch?v=z2674LdYW5I"
"> This is a very niche projectrockets to #1 on Hacker News"
"As the owner of a (recently revived) ewi-usb… I’m really curious about the mouthpiece bit. What kind of analog sounds are happening while playing this? Do you notice that reed age, strength etc are as impactful as on a real saxophone? Do you feel a need to swap mouthpieces for different styles of playing?I love the idea and ingenuity to make this happen!"
"Hi all, I'm the author of https://devenv.sh, https://cachix.org and https://nix.dev. I've been part of the Nix community for more than 10 years and in the last 4 years have focused on making it documented, simple, and accessible for any developer. After building Cachix (where you can store any software binaries in a few steps) we realized that there needs to be an intuitive interface for crafting developer environments. I'm really looking forward to what you build on top of devenv. We're only beginning to explore what's possible, so please give as much feedback as you have."
"this, devbox, and others seem to be alternatives to `nix-shell` or the flake-based `nix develop`, spurred i think by a desire for better UX. these are excellent for any project off-the-ground enough that you’ve run `git init` or created a repo. the adjacent area i’m struggling with is the “i want to write a tiny program to verify some conjecture, and i’ll probably throw it away an hour from now” case. think codegolf competitions, if you can’t relate. the environments i create for these follow similar patterns: it’s either python with numpy, pandas and plotly, or it’s Rust with clap and serde, say. i’d love a tool where i can just `cd /tmp/my-hourlong-project` and then `devenv python` to get a python shell that’ll have everything i often use (and probably more). hearing from people who use these tools, nobody has told me that any of them can do this, except that since they crawl up the fs tree looking for their env definition, maybe i could just stash some definitions at the fs root and get globally-invokable environments that way. seems hacky though: i’d hope for a method that’s more officially supported."
"This looks nice! I’m really enthusiastic about these nix based dev env systems. Recently saw devbox[0] here, tried it out and fell in love. It’s made me very interested in all things Nix!0 - https://news.ycombinator.com/item?id=32600821"
"Just to clarify, they are only banning the usage of the free offerings of Office and Workspace which do not provide the data governance / compliance features. The higher tiers of Workspace/Office provide this functionality."
"I think this move isn't so much about privacy. I think the French government is beginning to realize that these types of products and services constitute critical IT infrastructure for the country. As such, these products cannot be offered by a foreign country, no matter how friendly. I suspect the government offices will be next. I am not actually sure what alternatives are out there for MS Office and Google Docs."
"Nice to see someone taking privacy a little more seriously. The cloud has its place, but I've never been happy with the underhanded way that Office 365 "encourages" users to save to the cloud. When someone pays for one service and is continuously pushed to use another (with additional downstream costs), I wonder whether it isn't time to pursue antitrust."
"This post really starts to show the right direction: Z-Library "the desktop app", with built-in Tor (for seeders' safety), IPFS (for p2p distribution), IPNS (to download updated indexes), a local search engine (no SPOF, plus convenience), optional at-rest file encryption, and some random pinning algorithm to let users donate 1-10 GB of local disk space to host random chunks of the library. It needs to be dead easy for anyone to use and contribute."
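The "random pinning" piece is simple to sketch: given the library's chunk list and a user's donated budget, each client pins a random subset that fits. A toy version; the chunk format and function name are invented for illustration:

```python
import random

def choose_chunks_to_pin(chunks, budget_bytes, seed):
    """Pick a random subset of (chunk_id, size) pairs fitting the budget.

    Each client shuffles with its own seed, so coverage of the library
    spreads across volunteers instead of everyone pinning the same chunks.
    """
    rng = random.Random(seed)
    shuffled = chunks[:]
    rng.shuffle(shuffled)
    pinned, used = [], 0
    for chunk_id, size in shuffled:
        if used + size <= budget_bytes:
            pinned.append(chunk_id)
            used += size
    return pinned, used

library = [(f"chunk-{i}", 100_000_000) for i in range(200)]  # 100 MB each
pinned, used = choose_chunks_to_pin(library, budget_bytes=1_000_000_000, seed=42)
print(len(pinned), used)  # 10 chunks, exactly 1 GB used
```

A real design would weight the choice toward under-replicated chunks rather than pure randomness, but even this naive version means a large library survives as long as enough small donors participate.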
"Considering the goals of IPFS, I’m surprised it has trouble at this scale. I mean, it’s supposed to be “interplanetary” but this data would fit on two hard drives. The post talks about performance problems with advertising “hundreds” of content IDs. Is it a design problem or just an early implementation problem?"
"I'm fairly sure this isn't really safe or really reliable.To my knowledge, IPFS isn't really private, in that both the nodes hosting content can be easily known, and the users requesting content can be monitored. This is bad news for something law enforcement has already taken a serious interest in.IPFS also requires "pinning", which means that unless other people decide to dedicate a few TB to this out of their own initiative, what we have currently is a single machine providing data through an obscure mechanism. If this machine is taken down, the content goes with it.The amount of people that have 31 TB worth of spare storage, care about this particular issue, and are willing to get into legal trouble for it (or at least anger their ISP/host) is probably not terribly large. The work could be split up, but then there needs to be some sort of coordination to somehow divide up hosting the archive among a group of volunteers."
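The coordination problem in the last paragraph, dividing the archive among volunteers without a central assignment server, has a standard stateless answer in rendezvous (highest-random-weight) hashing. A sketch with invented names:

```python
import hashlib

def assigned_volunteers(cid, volunteers, replicas=2):
    """Deterministically pick which volunteers host a given CID.

    Every client can compute the same answer locally: score each
    (volunteer, cid) pair by hash and take the top `replicas`.
    """
    def score(volunteer):
        digest = hashlib.sha256(f"{volunteer}:{cid}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return sorted(volunteers, key=score, reverse=True)[:replicas]

volunteers = ["vol-a", "vol-b", "vol-c", "vol-d"]
cids = [f"Qm{i:04d}" for i in range(1000)]

# Each volunteer ends up hosting roughly replicas/len(volunteers) of the
# data, and when a volunteer joins or leaves, only its share of CIDs moves.
load = {v: 0 for v in volunteers}
for cid in cids:
    for v in assigned_volunteers(cid, volunteers):
        load[v] += 1
print(load)  # roughly 500 assignments per volunteer
```

This addresses the "who hosts what" split, not the privacy problem: the hosting nodes are still publicly enumerable, as the comment points out.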
"I'm a nurse and I find the 7th point in the post especially relevant. I will add a disclaimer that I never worked in the ICU, so I can't speak for what happens in that type of unit. There is a serious issue with the flow of information in healthcare (or at least in the U.S.; I never worked elsewhere to know if it's any different). If you find something during your shift which will be important to know later on, it will certainly be lost as soon as you are off for a few days, or even as soon as a new nurse comes on. To give a somewhat crude example: if you find out that it is much easier to obtain a blood sample from the veins in the left arm of a patient vs. the right, many nurses will still stick the right arm countless times hoping to get something. And you can leave a chart note about things like that or mention it during report, but for the most part few people will think "hm, I wonder what everybody else had to deal with." They are probably too busy handling a thousand different things happening all at once. And, even if that is not the case, from what I observed it's simply not part of how things are done. And very often patients will get (justifiably) angry, saying "I've been complaining of x thing for days!" or some version of that. I think it would be much better for both patients and healthcare staff alike if there were a greater emphasis placed on the series of successes and failures that happen over the course of someone's care, not just seeing it as a single shift or a single problem happening at some isolated point in time."
"I noticed a lot of the same things when my dad was in the ICU. Some additional thoughts: 1. "Almost every patient has delusions and nightmares." I personally felt "off" just visiting my father. The sounds, smells, lights, and constant buzz of activity all contributed to a feeling of being in a surreal dreamworld, and lack of sleep compounds it. I can't imagine what my father was experiencing. 2. The food was HORRIBLE. One meal was a low-quality hamburger on a plain white-bread bun with a slice of "american cheese", fries, an iceberg lettuce salad with a couple of slices of cucumber and a single slice of tomato, a container of apple sauce, and a glass of milk. Lots of salad dressing and ketchup. They wouldn't let us bring better food into the ICU, and my dad didn't want to "make waves". 3. Family is critical. My father got better care because I, or my brother, was there to act on his behalf. That said, having obnoxious family members is worse than having none, from what I saw."
"> There’s no sense of a scientific method, reasoning from first principles, or even reasoning from similar cases though. It’s all shooting in the dark, and most of the time I felt like I could have done just as good a job on these long-term issues... This articulates very well what I've usually felt when dealing with doctors. It's like the story of the programmer who finds that his code outputs 5 when it should be 4, "fixes" it by adding if(return_value == 5): return_value = 4, and is satisfied. What I want is something like the television show House: the main character is unhinged, antisocial, and takes extreme risks, but at least he is curious enough to really figure out and understand the root of what's going on. To be fair, I don't actually think doctors lack curiosity or are incapable of this; the medical system as it's set up just doesn't allow it. For chronic issues, I've usually figured them out myself, as a layperson, by persistently keeping track of things, searching the web, reading, and experimenting over months and years."
"Being accommodating of small minorities seems like a kind and thoughtful thing to do. However, having large majorities (95%+ of the population) make concessions in order to accommodate a small group imposes a cost (however small) borne by many for the benefit of a few. My question is: how do you decide where to make that trade-off? What’s the cost-benefit analysis? At some point, where do people say that accommodating this 1/100k/1m people at a cost to another 300m+ people is not a worthwhile use of societal resources?"
"I saw this article posted earlier and was going to comment, but thought “you know what, this is obviously not HN material, I’ll just flag it”. 12 hours later, loads of upvotes and hundreds of comments. Fine. HN is for discussion of technology, startups, and creating stupid moral panics to stir up hatred against groups you want to oppress. In this case I actually have first-hand experience of this topic: my wife is currently pregnant. Not once have I seen any piece of advice or information that has been in any way less useful because of inclusive language. This whole article attacks a problem - using inclusive language - that only exists if you hate the people being included and don’t want them included. Which, apart from being obviously morally repugnant, also seems like an odd discussion to have on a tech website. I guess we’re interested in ethics in game journalism again…"
"I think it's interesting that Lewis Carroll described this battle over language many years ago: ‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’ ‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’ ‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’"
"This article’s main point is that AI technology has economic potential. It makes it by parodying common arguments that such AI cannot develop into goal-driven beings. One example of such criticism is this five-year-old piece: https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/ Few people are skeptical about AI technologies’ economic impact, so in that sense the article slightly misses the mark, but it’s so funny that I don’t mind. To state my point in the vein of the article: the bulldozer did change our economy, but it does most, if not all, of its productive work in cooperation with a human. Turning a bulldozer on and letting it run by itself is usually a waste, and may be dangerous."
"I feel we're talking too much about the article and AI and not enough about eagles throwing goats off mountains. Say what you will about drone warfare, at least you don't get much time to think about it."
"There are also bats, which have much more flight control than birds: since they have a hand in their wing, covered with skin and muscle, they have a great deal of additional control. Here's a great video from a study: https://youtu.be/BNNAxCuaYoc And there are kestrels hovering (I have also seen hawks do this): https://youtu.be/7j6OsP7zL6w https://youtu.be/mDRcLAkRZ50 Edit: Now I've gone on a YouTube binge and landed on New Zealand keas, which I didn't know about before. And damn, are those birds intelligent."
"Half the comments here are talking about the vtuber herself. Who cares; it's been discussed before. Just imagine if half the thread were debating what gender she is. What I am interested in is the claims here: https://asahilinux.org/2022/11/tales-of-the-m1-gpu/#rust-is-.... (what is it called if it comes with a proof?). The resident C/C++ experts here would have you believe that the same is possible in C/C++. Is that true?"
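For readers wondering what the linked post's safety claims amount to in practice, here is a minimal sketch (my own illustration, not from the article) of the class of bug Rust's borrow checker rejects at compile time. The C++ equivalent - keeping a pointer into a vector across a push_back that may reallocate - compiles cleanly and is undefined behavior at runtime:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // The following would not compile: `first` borrows `v`, so mutating
    // `v` while the borrow is alive is rejected before the program runs.
    // let first = &v[0];
    // v.push(4);            // error: cannot borrow `v` as mutable
    // println!("{first}");  //        while `first` is still in use

    // The safe ordering compiles: copy the value out before mutating.
    let first = v[0];
    v.push(4);
    assert_eq!(first, 1);
    assert_eq!(v.len(), 4);
    println!("first = {first}, len = {}", v.len());
}
```

In C or C++ the same pattern can be caught by sanitizers or code review, but only dynamically or by convention; the Rust claim being debated is that the compiler enforces it statically.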
"Watching a virtual persona stream their development of the M1 GPU drivers is one of the most cyberpunk things I've ever seen! It's easy to forget that this world looks closer and closer to the ones dreamed up by Gibson, Stephenson, etc. What a time to be alive."
"The fact that so much hardware these days is running a full real-time OS all the time annoys me. I know it's normal and understandable, but everything is such a black box, and it has already caused headaches (looking at you, Intel)."