With the rise of these retro-looking websites, I feel it's possible again to start using a browser from the '90s. Someone should make a static-site social media platform for full compatibility.
Not so much. While a lot of these websites use classic approaches (handcrafted HTML/CSS, server-side includes, etc.) and aesthetics, the actual versions of those technologies used are often rather modern. For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C WR in 2017), while a period site would have used an HTML table to get the same effect. Of course, you wouldn't want to do it that way nowadays, because it wouldn't be responsive or mobile-friendly.
(To be clear, I don't think this detracts from such sites; they adopt new technologies where those provide practical benefits to the reader, and many indieweb proponents push this as a progressive, rather than reactionary, praxis.)
A couple of years ago I made this: https://bootstra386.com/ ... it's for a project. This is genuinely 1994 style with 1994 code that will load on 1994 browsers. It doesn't force SSL; this does work. I made sure of it.
The CSS on the page is only to make modern browsers behave like old ones in order to match the rendering.
If you look closely, the guestbook has some JavaScript to defeat spam: https://bootstra386.com/guestbook.html but it's the kind of JavaScript that Netscape 2.0 can run without issue.
> This is genuinely 1994 style with 1994 code that will load on 1994 browsers.
Unfortunately it won’t, at least not when you’re serving it with that configuration.
It uses what used to be called “name-based virtual hosting” (before it became the norm), which looks at the Host request header to determine which site to serve. Internet Explorer 3, released in 1996, was the first version of Internet Explorer to send a Host header. I think Netscape 3, also released in 1996, might’ve been the first version to support it as well. So, for instance, Internet Explorer 2.0, released in 1995, will fail to load that site at that URL. If you test locally with localhost, this problem won’t be apparent, because you aren’t using name-based virtual hosting in that situation.
If you need to support early-1996 browsers and older, then your site needs to be available when you request it without any Host header. In most cases, you can test this by using the IP address in your browser location bar instead of the hostname.
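To make that concrete, here's a minimal sketch (mine, not the parent commenter's) of what such a pre-Host-header request looks like on the wire; the IP address below is a placeholder for whatever server you want to check.

```python
# Send an HTTP/1.0 request *without* a Host header, the way a pre-1996 browser would,
# and see what comes back. 192.0.2.1 is a placeholder IP; substitute your own server.
import socket

SERVER_IP = "192.0.2.1"

with socket.create_connection((SERVER_IP, 80), timeout=10) as sock:
    sock.sendall(b"GET / HTTP/1.0\r\n\r\n")   # request line only, no headers
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# If the server relies purely on name-based virtual hosting, you'll typically see a
# default/placeholder site or an error here rather than the site you expected.
print(response.split(b"\r\n\r\n", 1)[0].decode(errors="replace"))
```

If the right site comes back, browsers that never send a Host header should be fine.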
Edit:
At one point around 1998, it wasn’t possible to directly install Internet Explorer 4 on Windows NT 4, because it shipped with Internet Explorer 2 and microsoft.com used name-based virtual hosting, or at least their downloads section did. So the method to install Internet Explorer 4 on Windows NT 4 was to use Internet Explorer 2 to download Netscape Navigator 4, and then use Netscape Navigator 4 to download Internet Explorer 4.
Using the IP address is a tricky one for something that is supposed to be Internet facing in the 2020s.
In the modern world, one common probe performed by attackers is to see whether a site responds when given its own IP address in the Host: header, or the address-to-name lookup result of that IP address in the DNS, or the well-known defaults of some WWW servers.
What they're relying upon, of course, is people or software allowing IP addresses and reverse-lookup domain names as virtual hosts, but forgetting to apply security controls to them.
Or, equally bad, the fallback when no Host: header is supplied being a private/internal WWW site of some kind.
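For the defensive side of that, a quick sketch (again just an illustration; the host values are placeholders) is to ask your own server for "/" with the Host: header set to the things attackers try, and confirm nothing private or default leaks out:

```python
# Probe your own server the way the attackers described above do: request "/" with
# the Host header set to the raw IP, to a reverse-lookup-style name, and so on.
# All values below are placeholders for illustration.
import http.client

SERVER_IP = "192.0.2.1"   # your server's public IP (placeholder)

for host_value in (SERVER_IP, "192-0-2-1.reverse.example", "localhost"):
    conn = http.client.HTTPConnection(SERVER_IP, 80, timeout=10)
    conn.request("GET", "/", headers={"Host": host_value})
    resp = conn.getresponse()
    print(f"Host: {host_value!r} -> {resp.status} {resp.reason}")
    conn.close()
```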
I ran ns2 on this in testing it two years ago. I'll check again this week.
> For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C WR in 2017), while a period site would have used an HTML table to get the same effect.
Are they going out of their way to recreate an aesthetic that was originally the easiest thing to create given the language specs of the past, or is there something about this look and feel that is so fundamental to the idea of making websites that basically anything that looks like any era or variety of HTML will converge on it?
I'm happy they didn't choose to go full authentic with quirks mode and table-based layouts, because Firefox has some truly ancient bugs in nested table rendering... that'll never get fixed, because... no one uses them anymore!
I think the layout as such (the grid of categories) isn't particularly dated, though a modern site would style them as tiles. The centered text can feel a little dated, but the biggest thing making it feel old is that it uses the default browser styles for a lot of page elements, particularly the font.
I think it’s the former. Many of these retro layouts are pretty terrible. They existed because they were the best option at the time, but using modern HTML features to recreate bad layouts from the past is just missing the point completely.
They’re making their own point. This is a document as a piece of expression and communication, not pure utility.
This is totally doable! It can be done with static sites + rss (and optionally email).
For example, I do this with my website. I receive comments via email (with the sender’s addresses hashed). Each page/comment-list/comment has its own rss feed that people can “subscribe” to. This allows you to get notified when someone responds to a comment you left, or comments on a page. But all notifications are opt-in and require no login because your rss reader is fetching the updates.
Since I’m the moderator of my site, I subscribe to the “all-comments” feed and get notified upon every submission. I then review the comment and the site rebuilds. There are no logins or sign-ups. Commenting is just pushing, and notifications are just pulling.
example https://spenc.es/updates/posts/4513EBDF/
I plan on open sourcing the commenting aspect of this (it’s called https://r3ply.com) so this doesn’t have to be reinvented for each website, but comments are just one part of the whole system:
The web is the platform. RSS provides notifications (pull). Emailing provides a way to post (push) - and moderate - content. Links are for sharing and are always static (never change or break).
The one missing thing is a “pending comments” cache, for when you occasionally get HN-like traffic and need comments to be displayed immediately, if only temporarily. I’m building this now, but it’s really optional and would be the only thing in this system that even requires JS or SSR.
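For anyone wondering what the moving parts might look like, here's a rough sketch of the general email-to-comment pattern (my illustration, not the actual r3ply code): parse the incoming mail, hash the sender's address so it never appears publicly, and turn the result into an RSS item for the page's comment feed.

```python
# Illustrative sketch only (NOT the r3ply implementation): email in, RSS item out.
import email
import hashlib
from email.policy import default
from xml.sax.saxutils import escape

def comment_from_email(raw_bytes: bytes) -> dict:
    msg = email.message_from_bytes(raw_bytes, policy=default)
    sender = msg["From"] or ""
    body_part = msg.get_body(preferencelist=("plain",))
    return {
        # Hash the address so it never appears in the published feed
        "author": hashlib.sha256(sender.encode()).hexdigest()[:12],
        # Subject carries the page (or parent comment) URL being replied to
        "page": (msg["Subject"] or "").strip(),
        "body": body_part.get_content().strip() if body_part else "",
    }

def rss_item(c: dict) -> str:
    return (
        "<item>"
        f"<title>Comment by {c['author']}</title>"
        f"<link>{escape(c['page'])}</link>"
        f"<description>{escape(c['body'])}</description>"
        "</item>"
    )
```

The static-site generator would then just drop these items into the per-page feed and the "all-comments" feed on the next rebuild.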
It does not work for people who only use a web interface for e-mail. It would be nice to provide textual instructions (send this subject to this e-mail address) instead of mailto links only.
I really like that idea. I need to add it to my own site to test it out and let it bake.
Do you think this would work: a little icon that opens a pure-HTML disclosure element with instructions, and a design with the text laid out sort of in the shape of an email?
“(Text only instructions) Send an email like this:
To: <site>@r3pl.com
Subject: <page_or_comment_url>
Body: <write your comment here, be careful to not accidentally leave your email signature>”
Your comment system is fantastic. I've been looking for something like this literally for decades. Hope you will open source it soon. I would like to use it with my blog.
Thank you so much. It should be ready by the end of this month actually. I’ll post on HN a link to the docs then.
It looks very interesting. My main concern would be attempting to filter spam comments, which invariably end up ruining these types of things.
> The web is the platform.
I like your thinking. Beautiful website, by the way!
I loaded up Windows 98SE SP2 in a VM and tried to use it to browse the modern web, but it was basically impossible since it only supported HTTP/1.1 websites. I was only able to find maybe 3-4 websites that still supported it and would load.
I would expect your main problem to be SSL/TLS. As far as I know, even modern web servers have no problem serving content to HTTP/1.0 clients.
In theory, yes, although there are some fairly big stones falling in the avalanche of turning off HTTP/0.9 and HTTP/1.0 at the server end.
In practice, it's going to be tricky to know without measurement; and the shifting of the default at the client end from 0.9 and 1.0 to 1.1 began back in 2010. Asking the people who run robots for statistics will not help. Almost no good-actor robots are using 0.9 and 1.0 now, and 0.9 and 1.0 traffic dropped off a cliff in the 2010s, falling to 0% (to apparently 1 decimal place) by 2021 as measured by the Web Almanac.
* https://almanac.httparchive.org/en/2021/http
If a modern HTTP server stopped serving 0.9 and 1.0, or even just had a problem doing so for decades-old pre-1.1 client software, very few people would know. Almost 0% of HTTP client traffic would be affected.
And, indeed, http://url.town/ is one of the very places that has already turned 0.9 off. It does not speak it, and returns a 1.1 error response. And no-one in this thread (apart from edm0nd) knew.
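Anyone can repeat that check; a minimal sketch (url.town is named here only because the comment above mentions it) is to send an HTTP/0.9-style request, which has no version token and no headers, and see whether a bare document or a versioned error comes back:

```python
# HTTP/0.9 check: a 0.9 request is just the request line, with no version and no headers.
import socket

HOST = "url.town"

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(b"GET /\r\n")
    first = sock.recv(4096)

if first.startswith(b"HTTP/"):
    print("Versioned response; the server is not speaking HTTP/0.9.")
else:
    print("Raw document returned, HTTP/0.9 style.")
```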
I tried old macOS ... sorry, Mac OS ... and yeah the main problem was SSL/TLS. HTTP/1.0 was fine but the SSL crypto algorithm negotiation never went through.
Try Retrozilla
https://portal.mozz.us/gopher/gopher.somnolescent.net/9/w2kr...
with these NEW values in about:config set to true:
Also, set these to false:

If your definition of social-media includes link aggregators, check https://brutalinks.tech. I've been working on things adjacent to that for quite a while now and I'm always looking for interested people.
I made a twitter clone in PHP during the 00s, but sadly I don't have the code anymore... Although it should be pretty easy to replicate.
> Someone should make a static-site social media platform for full compatibility.
What do you mean by that? Especially the "social" part?
It exists today and is called Gemini.
The biggest issue there is that regardless of how old your HTML elements are, old browsers only supported SSL 2/3 at best, and likely nothing at all, meaning you can't connect to basically any website.
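You can get a rough feel for this by capping the TLS version a client offers and watching the handshake fail; a sketch follows (example.com is a placeholder, and genuinely old clients spoke SSL 2/3, which modern OpenSSL builds won't even offer, so TLS 1.0 is about as far back as this simulation goes):

```python
# Cap the offered TLS version to approximate an old client and see whether the
# handshake succeeds. Substitute HOST with the site you want to test.
import socket
import ssl

HOST = "example.com"   # placeholder

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, 443), timeout=10) as raw:
            with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                print(version.name, "->", tls.version())
    except (ssl.SSLError, OSError) as exc:
        print(version.name, "-> handshake failed:", exc)
```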
> Someone should make a static-site social media platform for full compatibility.
Isn't that https://subreply.com/ ?
Argh, Yahoo is happening again!
(For the youth, this is basically what Yahoo was, originally; it was _ten years_ after Yahoo started before it had its own crawler-based search engine, though it did use various third parties after the first few years.)
It's like people are missing that first decade.
(I recall too that when Yahoo did add their own web crawler, all web devs did was add "Pamela Anderson" a thousand times in their meta tags to get their pages ranked higher. Early SEO.)
This reminds me more of the Open Directory Project (Dmoz.org), but I welcome it!
2010 archive of dmoz: https://web.archive.org/web/20100227212554/http://www.dmoz.o...
This is cute, but I absolutely do not care about buying an omg.lol URL for $20/yr. I'm not trying to be a hater, because the concept is fine, but anybody who falls into this same boat should know this is explicitly "not for them".
While I'm usually one of those who complain about subscription services, $20 per year is not considerably more than registering a .com with WHOIS protection. Given that you get a registered, valid domain name that you have control over, it's not a bad deal. It also helps filter out low-effort spam, especially if they decide to allow only n registrations per credit card should that become a problem.
We're always discussing something along the lines of "if you're not paying for it, you're the product" in the context of social media, yet now we're presented with a solution and criticize that it's not free.
You can also roll your own webring/directory for free on your ISP's guest area (if they still offer that) and there's no significant network effect to url.town yet that would make you miss out if you don't pay.
> ISP's guest area (if they still offer that)
What is (was) it? I can't find anything with a search (too many unrelated results).
Back in the day you could sometimes get hosting from your ISP. You might see URLs like `www.isp.com/~username` as examples of this.
I hadn't realised that this was tied to omg.lol until your comment, but now I'm confused. If it's from the omg.lol community, how come the address isn't something like url.omg.lol? (i.e. it's a community around a domain, so why isn't that domain used here?)
I think I pay around $100/year for my dirt cheap self-hosted stuff. So I mean you _can_ do it yourself, but $20 is pretty reasonable.
Just to be clear, $20/year is roughly one Starbucks drink per fiscal quarter.
Are you suggesting the market for omg.lol URLs intersects with the people who like to buy burnt coffee?
I only find it curious that there is just no limit to how cheap people on hackernews can be, despite being supposedly higher income earners.
Even if it was $10/year, people would still cry foul.
I don't think pointing out "this is a web directory full of links submitted by people willing to spend $20/yr" is being cheap, per se, the same way I don't think paying to be "verified" on Twitter means your content is worth paying attention to
There was a time when "willing to pay for access" was a decent spam-control mechanism, but that was long ago.
Agree. Recently I’ve noticed the complaints about paying for Kagi search [0]. HN loves to moan about how bad Google is, but paying $10 ($5 if you want a tiny plan) is apparently too much for something as critical as search?
As you say, those coffees seem to keep on selling…
[0] https://kagi.com/pricing
Everyone wants a Starbucks coffee per month from you. Even if you're on FAANG compensation, there's a finite number of coffees you can afford to pay for.
If you’re on FAANG compensation, earning roughly $200k after taxes in a year, and you spent all of it on Starbucks coffee, you could buy roughly a century’s worth of coffee at one a day.
If, on the other hand, you spent the $200k on leasing an omg.lol domain in perpetuity, you could hold the domain for 10 millennia.
If we were in the Dune universe, that means your omg.lol domain would expire roughly around the same time as the Butlerian Jihad starts and the thinking machines are overthrown.
Better yet, people will still buy coffee they can't afford and then complain about how they can't afford coffee.
FWIW I'm a happy paying customer of Kagi search for quite a while. I am very much not opposed to paying for things _if paying = value_
This :D
X is just one cappuccino, Y is just 3.5 bagels, Z costs not more than a pint, A costs almost as much as a nice meal … and so on. God's sake! :)
Remember url.city? https://web.archive.org/web/20141122194515/https://dir.yahoo...
https://midnight.pub from https://nightfall.city is cool too.
Having studied, and attempted to build, a few taxonomies / information hierarchies myself (a fraught endeavour, perhaps information is not in fact hierarchical? (Blasphemy!!!)), I'm wondering how stable the present organisational schema will prove, and how future migrations might be handled.
(Whether for this or comparable projects.)
<https://en.wikipedia.org/wiki/Taxonomy>
<https://en.wikipedia.org/wiki/Library_classification>
Clay Shirky's essay from 2005: "Ontology is Overrated" (centred on Yahoo!'s directory of links, oddly enough)
https://web.archive.org/web/20191117161738/http://shirky.com...
Unexpectedly related to the problem of perfect classification is McGilchrist’s The Master and His Emissary. It argues that the human mind is a duet in which each part exhibits a different mode of attending to reality: one seeks patterns and classifies, while the other experiences reality as an indivisible whole. The former is impossible to do “correctly”[0]; the latter is impossible to communicate.
(As a bit of meta, one might notice that in making this argument it itself has to use the classifying approach, but that does not defeat the point and is rather a prerequisite for communicating it.)
Notably, the classifying mode was shown in other animals (as this is common to probably every creature with two eyes and a brain) to engage when seeking food or interacting with friendly creatures. This highlights its ultimate purposes—consumption and communication, not truth.
In a healthy human both parts act in tandem by selectively inhibiting each other; I believe in later sections he goes a bit into the dangers of over-prioritizing exclusively the classifying part all the time.
Due to the unattainability of comprehensive and lossless classification, presenting information in ways that allow different competing taxonomies to coexist (e.g., tagging) is perhaps a worthy compromise: it still serves the communication requirement, but without locking into a local optimum.
[0] I don’t recall off the top of my head exactly how Iain gets there (there is plenty of material), but similar arguments were made elsewhere—e.g., Clay Shirky’s points about the inherent lossiness of any ontology and the impossible requirement to be capable of mind reading and fortune telling, or I personally would extrapolate a point from the incompleteness theorem: we cannot pick apart and formally classify a system which we ourselves are part of in a way that is complete and provably correct.
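To make the tagging point a bit more concrete, a toy sketch (my example, not the commenter's): a strict tree forces each item into exactly one slot, while tags let several competing classification schemes coexist over the same items.

```python
# A single hierarchy: each site lives in exactly one branch.
hierarchy = {
    "Hobbies": {
        "Yarn craft": ["knitting-blog.example"],
        "Travel": ["slow-travel.example"],
    },
}

# Tags: the same sites participate in many overlapping groupings at once.
tags = {
    "knitting-blog.example": {"hobbies", "crafts", "wool", "slow-living"},
    "slow-travel.example": {"hobbies", "travel", "slow-living"},
}

# A tag query cuts across the tree's single taxonomy:
print([site for site, t in tags.items() if "slow-living" in t])
```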
Yes, the seeming hierarchy in information is a bit shallow. Yahoo, Altavista and others tried this and it soon became unmanageable. Google realized that keywords and page-ranking were the way to go. I think keywords are sort of the same as dimensions in multi-dimensional embeddings.
Information is basically about relating something to other known things. A closer relation is interpreted as locational proximity in a taxonomy space.
Keywords also have their limitations.
The US Library of Congress is an interesting case study to my mind. The original classification scheme came from Thomas Jefferson's private library (he sold the collection to the US Government after the original Library of Congress was burned in 1814). The classification has been made more detailed (though so far as I know the original 20 alphabetic top-level classes remain as Jefferson established them), and there's been considerable re-adjustment as knowledge, mores, and the world around us have changed. The classification has its warts, but it's also very much a living process, something I feel is greatly underappreciated.
At the same time, the Library also has its equivalent of keywords, the Library of Congress Subject Headings. Whilst a book or work will have one and only one Classification assigned to it (the Classification serving essentially as an index and retrieval key), there may be multiple Subject Headings given (though typically only a few, say 3-6 for a given work). These are used to cross-reference works within the subject index.
The Subject Headings themselves date to 1898, and there is in fact an article on the ... er ... subject, "The LCSH Century: A Brief History of the Library of Congress Subject Headings, and Introduction to the Centennial Essays" (2009), which I'm just learning about as I write this comment:
<https://www.tandfonline.com/doi/abs/10.1300/J104v29n01_01>
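As a toy sketch of that data model (values invented for illustration, not real catalogue entries): exactly one classification per work, plus a small set of subject headings for cross-referencing.

```python
# One classification (the shelving/retrieval key), several subject headings.
from dataclasses import dataclass, field

@dataclass
class CatalogRecord:
    title: str
    classification: str                                          # exactly one
    subject_headings: list[str] = field(default_factory=list)    # typically 3-6

record = CatalogRecord(
    title="A Hypothetical History of Web Directories",
    classification="ZA4225 .H97 2024",                           # invented call number
    subject_headings=[
        "Web directories--History",
        "Internet--Classification",
        "Information organization",
    ],
)
```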
I think something similar was tried on everything2.com back in the day (2000ish).
Nice website. But do I need to buy an omg.lol subdomain before I can contribute links here? Why is it an omg.lol subdomain? I'm happy to buy a new domain, but not so happy about buying a subdomain. I'm not sure why I'd be paying omg.lol to contribute links to url.town. What's the connection between the two?
Yeah, so $20 to add a URL to url.town.
Anyone with an account already that wants to take requests for URLs to add?
(Hey, charge $1 a request and you should be able to break even on your $20 domain purchase before the day is up.)
I'm a member of omg.lol, and thus able to curate links for this directory.
I'll take requests, but I don't guarantee I'll add just anything.
Logins are built on https://home.omg.lol/ which is an amazing looking community!
Pretty on the nose that the only sports category is road cycling.
Just needs a Web Ring (:->
Kind of like the indieseek.xyz directory. Love to see it.
Neat - I wish it showed how many entries there are for each category. I was disappointed to see a Parenting category, with nothing in it.
This, in the family > activities category, looks excellent though: https://offline.kids
It's now its own Hacker News submission, with many concluding that it is entirely LLM-generated content and thus highly suspect for any kind of accuracy at all.
* https://news.ycombinator.com/item?id=44789192
Yes, I submitted it
It does! Thanks for linking this - a quick browse revealed a few things my kid will be into.
Sadly, it's the same for sci-fi art. I had a link to submit, but you need to sign up and it's $20. Fair enough if they want to set some minimum barrier to filter out suggestions from every Tom, Dick, and Harry (and Jane?), but I don't feel invested enough in this to give them $20 just to provide a suggestion.
I clicked it too and was similarly disappointed. If you don't mind pasting it here I'd love to check it out and add it to my web index.
Hehe, we'll make our own url.town, with sci-fi art, and hookers!
https://www.simonstalenhag.se/
^ The link is for the sci-fi art, not the hookers.
Awesome work!
If you like that, wait till you see the hookers!
Lol, just kidding, that link is someone whose talent greatly exceeds my own.
Clearly, if you want descendent nodes, you'll be looking for the "Child" or "Leafnode" category ;-)
that hits deep
The fact that it already has categories for most hobbies but absolutely nothing for cars, motorbikes, or any mechanical engineering-related topic, makes me sad. I know it's not their fault - young people simply don't care anymore.
Go make an account and add your own links =)
> most hobbies
... Possibly I'm missing something, but currently it has four categories under "Hobbies"; folklore, Pokemon, travel and yarn craft. Are you suggesting that if someone added "car stuff", that would be, well, basically complete, the big five hobbies represented?
It's clearly extremely new and has almost no content as yet.
What's the selection criteria for being listed on the directory?
Someone wants to add it enough to click the button that adds the site. Sometimes you need to REALLY want to add it because no category is applicable so you also click the button to add the category.
I'm not sure either; I know a couple of websites which fit perfectly
I just can’t stand reading serif fonts on a screen. The site is not compelling enough to power through the torture the font induces.
Cool, but I'd like us to get past the idea that a site has to use Times font to be retro.
Times is really not suited to the web and is particularly bad on low-resolution screens. How many computer terminals used Times for anything but word processing?
Verdana was released in 1996 — is that too recent?
Verdana is sans serif, so not a replacement for Times New Roman.
Also, the website styles don't specify font-family at all, so you are complaining about your own browser defaults.
I think they meant that Verdana is a nice replacement for Times if you care about readability and presentation.
Good pickup on the font being the default browser choice, I didn't notice that!
> I think they meant that Verdana is a nice replacement for Times if you care about readability and presentation.
But that's not true at all! Maybe the point was that the web should use sans-serif instead of serif, but that statement (and I hope I'm not making a straw man here) is invalid as a blanket claim as well (in my opinion, ofc).
In the true spirit of the old web, you can adjust the default font in your browser's preferences to any font you prefer and the page respects it, as it doesn't specify what font to use at all.