GNU recutils – human readable, plain text databases | Hacker News — https://news.ycombinator.com/item?id=31832564
#recutils #gnu #database #text #hackernews
"Before hiring the threat actor, KnowBe4 performed background checks, verified the provided references, and conducted four video interviews to ensure they were a real person and that his face matched the one on his CV.
However, it was later determined that the person had submitted a U.S. person's stolen identity to dodge the preliminary checks, and also used AI tools to create a profile picture and match that face during the video conference calls.
KnowBe4, which specializes in security awareness training and phishing simulations, suspected something was off on July 15, 2024, when its EDR product reported an attempt to load malware from the Mac workstation that had just been sent to the new hire."
On many notebooks it is possible to install several storage devices.
One way is to use an mSATA drive in the WWAN slot, if it is not needed for a modem (and if a modem is needed, an external USB modem can be used instead).
It may also be possible to use the Wi-Fi card's M.2 slot for storage: with an adapter, NVMe drives can be connected.
And for machines with an ExpressCard slot, there are ExpressCard-to-NVMe adapters available.
So it's possible to have a storage device in the WWAN slot, in the Wi-Fi slot and/or in the ExpressCard slot.
There are also dual-slot adapters available, which make it possible to install two M.2 NVMe drives (instead of a single SSD).
Besides these solutions there are probably even more ways to add storage devices to (older) notebooks.
Does anyone know if there is a way to use one slot for two 2.5" SATA drives (via an adapter)? I have not seen such a setup yet.
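After installing drives any of these ways, a quick way to verify they are detected (assuming a GNU/Linux system with lsblk available):
- lsblk -d -o NAME,SIZE,TRAN,MODEL # lists each disk with its transport type (sata, nvme, usb)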
#hardware #linux #storage #hackernews #nvme #msata #sata #gnulinux
I just ran a test using Blacklight. Blacklight is a Real-Time Website Privacy Inspector.
Tested: expressvpn.com
ExpressVPN belongs to Kape Technologies, a UK- and Israel-based digital privacy and security company (ExpressVPN was acquired in 2021).
This VPN service markets itself with slogans like "Just one click to a safer internet - Going online doesn’t have to mean being exposed. Whether you’re shopping from your desk or just connecting at a cafe, keep your personal information more private and secure."
The result turned out to be much worse than expected... Personal conclusion: such a service is not recommended.
Blacklight Inspection Result
6 Ad trackers found on this site.
Blacklight detected trackers on this page sending data to companies involved in online advertising.
Blacklight detected scripts belonging to the companies Facebook, Inc., Microsoft Corporation and Alphabet, Inc.
3 Third-party cookies were found.
These are commonly used by advertising tracking companies to profile you based on your internet usage. Blacklight detected cookies set for Alphabet, Inc. and Microsoft Corporation.
The Facebook pixel is a snippet of code that sends data back to Facebook about people who visit this site and allows the site operator to later target them with ads on Facebook.
A Facebook spokesperson told The Markup that the company set up this system so that a user doesn’t have to be “simultaneously logged into Facebook and viewing a third-party website for our business tools to function.”
Common actions that can be tracked via pixel include viewing a page or specific content, adding payment information, or making a purchase.
This site uses Google Analytics and seems to use its "remarketing audiences" feature that enables user tracking for targeted advertising across the internet.
This feature allows a website to build custom audiences based on how a user interacts with this particular site and then follow those users across the internet and target them with advertising on other sites using Google Ads and Display & Video 360.
A Google spokesperson told The Markup that site operators are supposed to inform visitors when data collected with this feature is used to connect this browsing data with someone’s real-world identity. You know when those shoes you were looking at follow you around the internet? This is one of the trackers leading to that.
Some of the ad-tech companies this website interacted with:
The inspected website contacted some well-known actors in the ad-tech industry. Not all of these loaded trackers, so they may differ from those listed in the tests section above.
Alphabet
Blacklight detected this website sending user data to Alphabet, the technology conglomerate that encompasses Google and associated companies like Nest. The Silicon Valley giant collects data from twice the number of websites as its closest competitor, Facebook. An Alphabet spokesperson told The Markup that internet users can opt out of the company showing them targeted ads based on their browsing history via Google's Ad Settings page.
The site sent information to the following domains: doubleclick.net, google-analytics.com, google.com, googleadservices.com, googleoptimize.com, googletagmanager.com.
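A rough way to reproduce part of such a check yourself (a sketch only, not Blacklight's methodology; dynamically loaded trackers won't appear in the static HTML, and connect.facebook.net is assumed here as the usual Facebook pixel host, while the other domains are taken from the report above):
- curl -s https://www.expressvpn.com/ | grep -oE 'doubleclick\.net|google-analytics\.com|googletagmanager\.com|connect\.facebook\.net' | sort -u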
#vpn #tracking #security #linux #openvpn #wireguard #privacy #expressvpn #bsd #solaris #google #facebook #microsoft #hackernews #blacklight #alphabet #meta #marketing #trackers #trackingpixel
“The level of detail is shocking for a company like Apple,” Mysk told Gizmodo.
WTF!?
"An independent test suggests Apple collects data about you and your phone when its own settings promise to “disable the sharing of Device Analytics altogether.”"
New research says:
"For all of Apple’s talk about how private your iPhone is, the company vacuums up a lot of data about you. iPhones do have a privacy setting that is supposed to turn off that tracking. According to a new report by independent researchers, though, Apple collects extremely detailed information on you with its own apps even when you turn off tracking, an apparent direct contradiction of Apple’s own description of how the privacy protection works."
https://gizmodo.com/apple-iphone-analytics-tracking-even-when-off-app-store-1849757558
#tracking #apple #iphone #surveillance #prism #linux #bsd #gnulinux #safari #gizmodo #security #hackernews #analytics #privacy #computer #smartphones #phones #phone #spying #backdoor
Autosummarized HN: Hacker News summarized by an AI (specifically GPT-3). The system grabs the top 30 HN posts once every 24 hours (at 16:00 UTC), which are then reviewed by a human (Daniel Janus) to make sure none of the content violates the OpenAI content policy before being published on the site. Only guaranteed to run for August of 2022, because he has to pay the OpenAI bill and the site is not monetized. If you want it to run for longer, you'll have to get the code (it's open source -- written in Clojure) and get permission from OpenAI to run your own version of the site (and pay the OpenAI API bill).
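The fetch step is easy to sketch; the actual implementation is in Clojure, but a rough shell equivalent for grabbing the top 30 story IDs from the official HN Firebase API (assuming curl and jq are installed) would be:
- curl -s 'https://hacker-news.firebaseio.com/v0/topstories.json' | jq '.[:30]' # IDs of the current top 30 stories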
● NEWS ● #JeffGeerling #Censorship ☞ I almost got banned from #HackerNews https://www.jeffgeerling.com/blog/2022/i-almost-got-banned-hacker-news
" #HackerNews is supposed to be "rational" and "evidence-based", but it's a hive of virus misinfo" http://techrights.org/irc-archives/irc-log-techrights-190821.html#tAug%2019%2016:30:49
#DevKundaliya and #HackerNews play along and go along with this laughable lie that #Microsoft is some kind of #security expert with moral authority/credibility on this subject http://techrights.org/2021/07/26/microsoft-linux-fud/
TL;DR: In assessing relative risk status, the future must be considered, not simply the present.
An HN thread[0] discusses whether the US or Europe is experiencing a worse Covid situation. The question contains nuances and pitfalls, though the general answer seems to be: the US.
Covid and population figures here come from Worldometers.
The thread begins with Aperocky's comment asserting, correctly, that "The worst hit place right now is the United States of America."
Responding, esja asserts "this is not true", though they don't clarify their redefinition of "worst hit" for another two rounds of discussion, finally settling on "deaths today".
That basis is fatally (so to speak) flawed as it entirely dismisses the facts that:
Cases today translate directly to deaths 2--4 weeks in the future, at a best-case rate of 0.5% CFR and far more plausibly 1.5--3% CFR, based on present reported cases.[1]
US new cases per capita are at least on par with, if not worse than, Europe's.
Europe's daily case rates are trending at worst flat, and are generally decreasing.
US case rates are rising, at an accelerating rate.
The US today reports 158,363 new cases (7-day average), and a 3% CFR. In ~2--3 weeks, likely daily deaths will be 2,375--4,750, or 7.5--15 per million.[2]
Germany, to use esja's favoured example, reports 18,363 new cases (7-day average), and a 2% CFR. In ~2--3 weeks, likely daily deaths will be 367--550, 4.4--6.6 per million.[3]
All Europe reports ~220,000 new daily cases (16 Nov 2020, not smoothed). In ~2--3 weeks, likely daily deaths will be 3,300--6,600, 4.4--8.8 per million.[4]
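To make the arithmetic explicit, a sketch with bc (the per-million denominators, ~317M for the US and ~83M for Germany, are my assumptions):
- echo "158363 * 0.015" | bc # US at 1.5% CFR -> ~2,375 deaths/day
- echo "158363 * 0.03" | bc # US at 3% CFR -> ~4,750 deaths/day
- echo "scale=2; 2375 / 317" | bc # deaths/day per million at ~317M population -> ~7.5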
In all cases, baked-in future daily US mortality rates will be roughly twice those of Europe, adjusted for population, and are trending still further worse. The US 'benefits' only by having begun its annual seasonal coronavirus peak some 4--8 weeks later than Europe, with a European inflection beginning in September--October and a US inflection beginning in October--November.
To provide an analogy, esja is laughing at Europe being in a ditch whilst the US is racing toward a cliff's edge. Assessments of present health or wealth must include obvious future consequences or risks. Critics of EU response entirely ignore these, and reframe the initial criterion to do so.
Such analysis suffers from presentism and risk blindness and is utterly flawed.
Adapted from HN comments to the thread linked above.
Notes:
[0] Beginning here: https://news.ycombinator.com/item?id=25113115
[1] I'm ignoring the fact that reported fatalities undercount true COVID-19 fatalities (as demonstrated by overall excess deaths) by about 30%, per an August 2020 New York Times report and other independent studies and data. This is a largely global bias: it doesn't affect inter-regional comparisons, it simplifies analysis, and it strengthens my argument, as the case I present, bleak as it is, is less severe than the actual reality.
[2] Using 1.5--3% CFR.
[3] Also using 1.5--3% CFR, despite Germany's lower experienced CFR.
[4] Worldometers does not provide continental/regional plots or smoothed trends, though the law of large numbers helps somewhat. Again at 1.5--3% CFR, based on reported values; where reporting undercounts recoveries, experienced CFR is ~4%. Using a non-smoothed current high-point number further overstates total European future mortality relative to the US.
#covid19 #UnitedStates #europe #CriticalThinking #FlawedArguments #HackerNews #risk #worldometers
I don't like Android, and using Google Android is simply dumb. It just makes no sense to hand one's data to Google on a silver platter. Ungoogled Android has a big advantage though: it has many apps. Here are some of them, which were mentioned in a video by mobilsicher (if I remember correctly). These are from F-Droid.
These are useful apps. E.g. for checking system connections, Net Monitor (from SECUSO) is helpful. Or, for a simple test: in "Settings/Apps/Signal/Data Usage", activate the last three options (blocking mobile, Wi-Fi and VPN connections at the same time).
Then check via Net Monitor whether the app's connections are indeed blocked (if nothing shows in Net Monitor, swipe down to refresh).
#android #apps #androidapps #gnu #linux #gnulinux #vlc #security #blokada #privacy #netmonitor #afwall #checkey #hackernews #signal #briar #xmpp #conversations #pixelwheels #towerjumper #classyshark #exodusprivacy #skytube #newpipe #radiodroid #aegis #osmand
I've become increasingly aware of how conversation medium and participants shape the "quality" of conversation.
Conversation scales poorly.
It's also fragile and very easily destroyed, discouraged, or dissuaded.
The biggest issue I find on Reddit itself is that there's no notion of "thread (or post) as conversation". And absolutely no support for same. Reddit is where interesting conversations go to die.
An item is posted. It's at top-of-page for ... a few minutes or hours, possibly days ... then vanishes. And no amount of activity within a thread will boost it, generally. Even those who'd participated in the discussion have no signal of any activity. The best that can happen is that members might subscribe to replies for two days. This is madness.
Put another way, Reddit's post-weighting algorithm is all but entirely determined by posting time, not activity recency. This avoids "necroposting", for both good and bad. For small niche discussion, all but entirely bad.
Problem is that Reddit's scale spans about 6-8 orders of magnitude -- subreddits of < 10 members, to > 10,000,000. One-size-fits-all ... wears poorly. Most of the glaring problems are at large scale. The small subs get neglected. Clue flees.
The little-lamented Imzy had the problem of seeing Reddit's problems-at-scale, whilst utterly failing to grasp its own failures-at-inception --- no scale --- and failing to address those. Put another way, how you get to scale, by solving the problems of inception, teaches you nothing about how to survive at scale. The problems are entirely different.
As noted at HN, for all its copious faults, Google+ solved this particular problem well. Facebook may also (I don't use it). Microblogging platforms (Twitter, Mastodon, Fediverse) at least present individual posts within a thread well, though they seem to uniformly suck at actual threading (see: Threadreader). Diaspora ... kind of does this but was an immensely clunky slow interface for notifications & response.
But yes, as McLuhan said, "the medium is the message". It has profound impacts and influences, most not immediately apparent -- they're emergent properties.
Independent of medium, scale, expressive richness (e.g., markdown, multimedia), latency, arity, ephemerality / permanence, message size, moderation (leaf-node or trunk), culture, founding cohort, exogenous vs. endogenous motivators and incentives (or demotivators and disincentives), editability/revisability, search, organisation and management tools, protocols and standards, and much more, all matter.
I've discussed some of this at the (rather neglected) discussion of social media types and characteristics at Plexodus Wiki, see especially Platform Types and Features and Capabilities.
Adapted from a private Reddit discussion.
#media #conversations #generativity #MarshallMcLuhan #reddit #twitter #hackernews #mastodon #fediverse #diaspora #usenet #moderation #googlplus #gplus #plexodus #plexoduswiki
See: https://news.ycombinator.com/item?id=22038065
I'm predicting you'll reject this.
You're wrong to do so.
Basic HTML functionality is a cornerstone of the Web.
Practices have been evolving away from this.
My experience is that practices, in the absence of consequence, will devolve to a minimum viable standard (see "Tyranny of the Minimum Viable User" for a similar dynamic), and that there are manifest dynamics in otherwise unregulated markets which will increase negative experiences for users, in part because this is an effective market-segmentation technique. See my recent HN comments on Jules Dupuit (or look up his commentary on railroad carriage accommodations by class).
HN is a small but significant player. It has an outsized influence over startups, online technologists, web designers, and media entities whose content is linked to the site. Shifting the needle, even slightly, in this calculus, would be a net positive.
The practice will make it easier for users whose systems don't run JS to request graceful fallback from sites which don't function without it.
Various user-hostile practices (paywalls, JS requirements, autoplay video/audio, etc.) also have a significant knock-on negative impact on the experience of reading HN itself, both through the inaccessibility of content and the inevitable sidetracking discussions on how to overcome such malfeasance.
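(A rough way to check whether a submitted page survives without JS, assuming a text browser such as w3m is installed; the URL is a placeholder:
- curl -s https://example.com/article | w3m -dump -T text/html | head
If nothing meaningful is rendered, the site would be a candidate for such a ban.)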
My suggested implementation is to institute site bans based on reports / awareness, and to leave those bans in effect until the problem can be verified to be fixed. That is: the system needn't be perfect, but it should exist, bans should be instituted when requested, and sites themselves must take positive action to see them lifted.
(Multi-strike treatments for repeat offenders / recidivists / systems incompetent against regressions are an additional question; I'd recommend a once-every-six-months appeal for such cases.)
Please do the right thing.
-- Edward Morbius (dredmorbius@protonmail.com) Dr. Edward Morbius's Lair of the Id https://dredmorbius.reddit.com
invidio.us is an open source webapp which offers an alternative front-end to YouTube.
https://github.com/omarroth/invidious
It says on the page: "Invidious is what YouTube should be."
https://www.reddit.com/r/SideProject/comments/8wvazc/invidous_alternative_frontend_to_youtube/
https://www.youtube.com/watch?v=kMOWCZkU_QM
https://www.invidio.us/watch?v=kMOWCZkU_QM
If you right click the video, there is a "Save video as" option.
The site also works without javascript.
There are add-ons available which redirect YouTube URLs to invidio.us (userscript) or replace YouTube embeds with invidio.us embeds (userscript).
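The redirect those userscripts perform is essentially just a host swap, as the two links above show; a shell sketch of the rewrite:
- echo 'https://www.youtube.com/watch?v=kMOWCZkU_QM' | sed 's/www\.youtube\.com/www.invidio.us/' # prints the invidio.us URL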
#invidio.us #invidious #youtube #hooktube #google #gevil #linux #gnu #gnulinux #hackernews #security #privacy #javascript #noscript #scriptsafe #video #videos
Let's start with a translation of this four-year-old post: https://diasp.org/posts/3834747
There is not much buzz about SeaMonkey, a browser that is not well known and yet full of qualities. It is the legacy of the old Mozilla suite, which at the time included a web browser, an email client, and other things. SeaMonkey is the independent continuation of this project since Mozilla devoted itself to Firefox and Thunderbird.
The software therefore consists of a Gecko-based web browser, an e-mail client, an address book and an IRC client. The whole is extensible and built on modern technologies imported from other Mozilla products.
What I particularly like about SeaMonkey is its classic user interface. In fact, it has hardly changed since the days of the Mozilla suite. There is still a menu bar and toolbars that can be customized at will; the bars can be collapsed and removed, and theme management is available. It is less flashy than the latest Firefox, whose visual appearance frustrates me.
SeaMonkey very classically follows the canons of the old Mozilla, but behind it hide modern technologies and subtle evolutions, like the recent Gecko engine, bookmark synchronization, and an address bar that also performs searches.
Update: Recent versions do not include the IRC client (ChatZilla), but it can be installed as an extension.
http://www.seamonkey-project.org // Project News // July 27, 2018 // SeaMonkey 2.49.4 released
There are downloads on the official site or there are also unofficial builds available here:
http://www.wg9s.com/comm-257/
Consider what it says on this page: "the official Linux builds only require glibc version 2.12 (libc-2.12.so) and stdcxx version 3.4.16 (libstdc++.so.6.0.16) or later, the Linux builds provided on wg9s.com require glibc version 2.18 (libc-2.18.so) and stdcxx version 3.4.23 (libstdc++.so.6.0.23) or later."
Check:
- strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
I'm not sure about Ubuntu, but on Debian Stretch it's GLIBCXX 3.4.22 at the moment (if I'm not mistaken, the wg9s build won't work there).
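The glibc version can be checked too (this should work on any GNU/Linux system; the first line of output shows the glibc version):
- ldd --version | head -n 1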
After downloading and extracting the file (around 50 MB) from seamonkey-project.org, some more packages are required to get it running (judging from the error messages, these were the ones I could identify):
- sudo apt install libstdc++-6-dev
- sudo apt install lib32stdc++6
- sudo apt install libgtk-3-0:i386
- sudo apt install libasound2:i386
- sudo apt install libdbus-glib-1-2:i386
- sudo apt install libxt6:i386
SeaMonkey starts fine from the user's home directory (it did not work for me from /opt/seamonkey).
It looks a bit dated, but I actually don't mind, as the theme can be changed anyway. Btw, SeaMonkey is the default browser on the distros LXLE and Puppy.
#seamonkey #browser #mozilla #gecko #thunderbird #firefox #emailclient #irc #linux #gnu #gnulinux #lxle #puppy #puppylinux #hackernews
The DDG search engine is trialing a new design for its search page.
There's active feedback being solicited via Hacker News and on DDG itself.