arcanicanis
@arcanicanis@were.social

Just a profusely verbose fediverse interloper

arcanicanis,

Really it was just the Palm Pre’s hardware that held it back

I'd argue it was purely the rat race of "the most apps", where all other vendors lost out to iOS and Android. It didn't matter that it had all the essential applications, as well as plenty of the fads/gimmicks of the time (Angry Birds, Pandora, et al.) natively supported; it still boiled down to the normie logic of "BUT I WANNA RUN ALL THE APPS!" There was also the whole cultural sentiment of "if you make a niche Android/iOS app and sell it, you could become like a millionaire overnight" that people seemed to clamor around, as another gimmick economy (The Next Big Thing(TM) after the dot-com bubble).

There was also the tidbit that most of the operating system (outside of the Linux kernel and standard utils) was proprietary. It was only when it had pretty much been given up on that they started to open it up as various open source projects. Comparatively, Android was far more open source than webOS at the time.

perplestuff, to random
@perplestuff@fedi.furfag.biz
arcanicanis,

I warned you against registering on a mega-server; meanwhile, how many interruptions have you had on any of my servers? You're just using the same normie logic as registering on mastodon.social and then concluding everything else is crap.

AlgorithmWolf, to random

Fediverse is possibly the most unreliable, unresilient, and untrustworthy piece of distributed social software I have seen in my life.

In contrast, the "horrible" badly encrypted and centralised Telegram is practically impossible to censor by ISPs or governments, and no effective attacks against their cryptography have been shown that actually allow a MITM to read your Telegram messages.

Meanwhile Fedi allows your server maintainer to read all your DMs.

Cool software.

RE: https://tech.lgbt/users/wakame/statuses/112150443411897850

arcanicanis,

I believe there's just some tunneling built into the Telegram client, and/or some heavy DNS caching (or hardcoded fallback IPs), so any DNS-level blocks can be easily circumvented.

There's also the factor of: as a government, usually you can't just selectively keep people you don't like from accessing it, nor selectively keep people from accessing content you don't want them to. It's a boolean of: block [nearly] everyone indiscriminately, or block nobody. If suddenly a government goes to all extremes to properly cut access to Telegram, then usually the layperson starts to wake up a little that their government's gone pretty unhinged.

If there are active threats and motivations to make things more censorship-resistant, then things can definitely be hardened further in that direction, similar to how there are efforts like Nostr and such (which could be rolled into a polyglot ActivityPub server or similar), or the efforts with nomadic identity/data, etc.

Then conversely, if something is "too" censorship-resistant, then there's the crowd that complains about all the content they don't like.

I won't touch the crypto/privacy counterpoints, since others have already detailed out that subject. It is mildly ironic, because MTProto was originally mocked as the example of "why you shouldn't roll your own crypto" for the mistakes they made.

Also, what software have you written or contributed to? I only ask this when there's the pattern of rants of "omg everything is absolute trash, it's all utter garbage, wtf is this shit", and I don't see any history of someone bothering to fix any existing tooling themselves, nor build anything better, yet they bicker as if they're someone that knows better.

arcanicanis,

but I do not write code under my Ulven fursona

Alright, that's a fair counterpoint.

my professional competence as a software developer is not relevant to

I'm not saying it's a matter of competence, it's more the: "all this shit sucks, I'm able-bodied to help fix it, but don't". There's already plenty enough people that just open 'issue' reports on GitHub (and alike), complaining about how they want the software to work, and that it's crap otherwise. Meanwhile the core devs are typically doing it as a free passion project, while everyone else just watches and snaps their fingers at the devs about "how it should be done instead".

I see some of the same complaints with XMPP, where you've got able-bodied developers whining that they don't like the cosmetics or menu structure of an XMPP client, while it's completely open source software that they have the freedom to contribute back to. But much of 'open source advocacy' in recent years seems to be people caring only about gratis ("I don't have to pay money for it") and nothing else.

arcanicanis,

the main XMPP maintainers have completely given up on mainstream XMPP adoption

Where? There are alternate efforts like rebranding and trying to provide a "WhatsApp-style" experience, such as with Snikket, last I'm aware of. Much of the apathy isn't about winning people over to XMPP; it's that no matter what you do, no matter how much you pad and polish the experience, people will so trivially be herded to the next predatory VC startup that's trying to farm up another userbase to later sell off. It's just the same with all the people that herded to fedi from Twitter: they were just as quick to herd from fedi to Bluesky, and then whatever comes next. It's all crowdfollowing.

Meanwhile, there's absolutely plenty of large-scale use of XMPP in backend infrastructural contexts. It's just that there's no "money" in the market of making XMPP clients that allow anyone to register/move to any service. You can't farm a captive userbase on a federated platform, so it doesn't get VC money. Thus the bulk of XMPP client dev is down to the devs doing it as a passion project, with some NLnet grants from time to time.

I don't know what red tape there is for XEPs, as I've seen a handful of Experimental drafts published with an assigned XEP number, by people that I don't see involved with any client/server project. After I get done with the bulk of my ActivityPub projects, I'll likely be trying to push for work/implementation on some of the sticker XEPs. I can definitely contribute to Movim, maybe Gajim, etc.

arcanicanis,

Really, read my article. XMPP effectively doesn't really have any stable, recommended XEPs for E2EE implementation. They're all either deprecated, or experimental

I thought I argued this before with you, but I guess it was someone else: https://were.social/notice/AerAv86jqk3DUjUWm0

And I have read and commented on the blog back here: https://were.social/notice/AdxSLrqbxDLtXlqFkm

Holy shit yeah, like, instant messaging is not exactly a difficult computer science problem. You'd think we'd have gotten the one application that can do everything perfectly and everyone loves?

But that's the problem: this isn't a technical reason. It's a user problem. You can cultivate a thriving community even on IRC; the protocol doesn't entirely matter. It's people making up excuses, instead using the worst of the options, and crowdfollowing to the next startup platform anyway.

We have mountains upon mountains of open source software, much of which tries to give users extensive freedom and control, but the users just keep falling for one scam after another, always falling for the bait, always repeating this damn cycle. You could have a full open source rewrite of Discord, and they'd just herd to the next shit startup anyway.

arcanicanis, to random

I guess I successfully created a did:plc and have it published to (sorta) Bluesky's backend did:plc registry: https://plc.directory/did:plc:s2m7kbq2unki7rager5aw6sw/log

Rather than endorsing any sort of an ATProto PDS or anything, I instead have it pointing to my ActivityPub (and other) identifiers in varying forms.

I'm probably the only [non-employee] user (or at least: one of very few) on Bluesky's infrastructure that has full custody and control over their own private keys for their did:plc identity, and yet I don't even have a Bluesky account. Unless I'm just uninformed of something buried somewhere allowing you to export at least one of your rotationKeys (not the signingKey, which is just for signing posts, etc). Because without that, you don't really control your identity at all, only Bluesky exclusively does.

Meanwhile, in this endeavor, I "only" had to:

  • Write a DAG-CBOR and CIDv1 encoder
  • Write a Multibase and Multikey encoder and decoder
  • Write a base58btc encoder/decoder
  • Write a base32 encoder
  • Write functions to compress and decompress a secp256k1 public key (involves crypto maths, for decompression)
  • Write some very adhoc ASN.1 DER encoding/decoding functions (just to encode a raw secp256k1 public key into PEM encoding, to feed to OpenSSL; and then extract the r and s values from the outputted signature from OpenSSL)
  • Write a function to generate a did:plc identifier, from the genesis operation
  • Write a lot of test code

With how scarcely some topics are documented, and how scattered many tidbits of info are, I swear some of this is almost intentionally a trap to sell consultancy.
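
For anyone curious, the last derivation step is surprisingly small once the encoders above exist. Below is a rough Python sketch (not my PHP implementation, and not the canonical tooling) of how a did:plc identifier falls out of a signed genesis operation: SHA-256 over the DAG-CBOR encoding, base32 (lowercase, unpadded), truncated to 24 characters. The field values are placeholders, and `dag_cbor` here stands in for any spec-compliant DAG-CBOR encoder.

```python
# Sketch: deriving a did:plc identifier from its signed genesis operation.
# All field values below are placeholders; the real "sig" is an unpadded
# base64url ECDSA (secp256k1) signature over the DAG-CBOR of the unsigned op.
import base64
import hashlib

import dag_cbor  # assumed third-party DAG-CBOR encoder (pip install dag-cbor)

signed_genesis_op = {
    "type": "plc_operation",
    "rotationKeys": ["did:key:zQ3sh...placeholder"],
    "verificationMethods": {"atproto": "did:key:zQ3sh...placeholder"},
    "alsoKnownAs": ["at://example.invalid"],
    "services": {},
    "prev": None,
    "sig": "placeholder-signature",
}

digest = hashlib.sha256(dag_cbor.encode(signed_genesis_op)).digest()
suffix = base64.b32encode(digest).decode("ascii").lower().rstrip("=")[:24]
print("did:plc:" + suffix)
```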

arcanicanis,

It isn't necessarily documentation difficulties with did:plc itself, just maybe a lack of some prerequisite formats/encodings (and their justifications).

For example, with DIDs and Object Integrity Proofs in ActivityPub, I had described how/why each part fits together (all the way down to Multibase, Multikey, JCS, etc), some history, and so on in a primer on the concept: https://arcanican.is/primer/ap-decentralization.php (sorry for the lack of responsive design, I need to finish up my CSS work on my unusual personal website). I'm sure there are probably some gaps in my overview.

Nonetheless, the details of DAG-CBOR and especially Multikey seem very scattered and difficult to find if people don't know where to look. Also, from the ATProto docs previously, I can't remember if I could even find any section that mentioned where the did:plc spec was, or any mention of plc.directory, unless I just overlooked it back then. I do see the "Identity" section now points to a GitHub repo on did:plc.
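
To spare others the hunting: in practice a Multikey is just the raw key bytes prefixed with the key type's multicodec code, then multibase-encoded (base58btc, with a leading 'z'). A minimal sketch for an Ed25519 public key, assuming the third-party `base58` package; real Ed25519 Multikeys come out starting with "z6Mk" (compressed secp256k1 keys, as used around did:plc, use multicodec 0xe7 and come out starting with "zQ3sh" instead).

```python
# Sketch: Multikey/multibase encoding of an Ed25519 public key.
# The multicodec for ed25519-pub is 0xed, which varint-encodes to 0xed 0x01;
# prepend that to the 32 raw key bytes, base58btc-encode, and add the
# multibase prefix 'z'.
import base58  # assumed third-party dependency (pip install base58)

ED25519_PUB_PREFIX = bytes([0xED, 0x01])  # varint-encoded multicodec 0xed

def ed25519_multikey(raw_public_key: bytes) -> str:
    if len(raw_public_key) != 32:
        raise ValueError("expected a 32-byte raw Ed25519 public key")
    encoded = base58.b58encode(ED25519_PUB_PREFIX + raw_public_key)
    return "z" + encoded.decode("ascii")

# Dummy all-zero key, purely to show the shape of the output.
print(ed25519_multikey(bytes(32)))
```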

Part of why I've implemented much of it from scratch is to evaluate the totality of everything required together (as well as trust factors, as there are a lot of bad libraries out there that people put blind faith into), since technical solutions should be evaluated by the whole of all their dependencies.

For example, in the proposal for Object Integrity Proofs in ActivityPub, just JCS (JSON Canonicalization Scheme; RFC 8785) is needed to canonicalize JSON-like data into a deterministic form for sign/verify, versus bringing in a whole system of compacted binary JSON-like serialization with additional semantic tagging (to denote CIDs), mainly just for sign/verify use. The bigger the stack is, the more burden it creates when porting things to new/uncommon languages.
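
As a rough illustration of how small that sign-side flow is (a sketch, not the proposal's exact proof format): canonicalize, then sign the canonical bytes. For payloads without awkward floating-point values, `json.dumps` with sorted keys and compact separators approximates RFC 8785; a real implementation should still use a proper JCS library, since full JCS also pins down number serialization. The key here is a throwaway generated for illustration.

```python
# Sketch of canonicalize-then-sign with an illustrative Ed25519 throwaway key.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

def jcs_approx(obj) -> bytes:
    # Approximates RFC 8785: sorted keys, no insignificant whitespace, UTF-8.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://example.invalid/actor",
    "content": "hello",
}

key = ed25519.Ed25519PrivateKey.generate()      # throwaway key for illustration
signature = key.sign(jcs_approx(note))          # published alongside the object
key.public_key().verify(signature, jcs_approx(note))  # raises if tampered with
```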

Generally I hold the belief that something is only a standard when people are able to make their own implementation from the text of the standard, and not just by importing a vendor's library instead. It's part of what distinguishes the crowd of XMPP vs Matrix: you typically have the kind of people that have actually tried to implement a standard themselves, versus those that only participate as an end user (just importing a vendor lib, or parading about a cosmetic UI/UX). I don't believe something is a standard until there are at least two complete independent implementations (by separate development groups, e.g., not just some of the same Matrix dev team making Dendrite too).

The motivation with my projects is primarily self-contained, minimal PHP implementations that people can just toss onto any conventional hosting provider (such as a typical LAMP stack) and be able to participate in the social web with very little technical knowledge (just knowing how to get hosting and a domain, and extracting some files). If something can work in constrained, less-privileged environments, then it allows more opportunity for people to self-host.

In ActivityPub, the bare minimum needed for someone to participate is just a generic webserver that serves objects (which can just be plain files) with an application/activity+json media type, plus some dynamic code at their inbox endpoint to handle inward activity. WebFinger could even be accomplished as a basic URL rewrite. The end user could even handle delivery entirely client-side, as long as they have custody of their keys (and as long as the recipient server permits it with CORS). The rest can be incrementally built up as icing on the cake (access control, timeline generation, etc).
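
As a toy illustration of just how little that minimum is (a sketch under my own assumptions, not any particular implementation): statically serve an actor object under the right media type, and accept POSTs at an inbox. Everything else (WebFinger, signature checking, delivery, storage) is the icing mentioned above. All URLs and fields below are illustrative.

```python
# Toy illustration of the "bare minimum": serve JSON objects under
# application/activity+json and accept POSTs at an inbox endpoint.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ACTOR = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "http://localhost:8000/actor",
    "type": "Person",
    "inbox": "http://localhost:8000/inbox",
}

class TinyApHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/actor":
            body = json.dumps(ACTOR).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/activity+json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        if self.path == "/inbox":
            length = int(self.headers.get("Content-Length", 0))
            activity = json.loads(self.rfile.read(length) or b"{}")
            print("received activity:", activity.get("type"))
            self.send_response(202)  # accepted for (pretend) processing
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), TinyApHandler).serve_forever()
```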

With periodically orbiting/skimming the ATProto docs, I don't have a clear image of what the bare minimum implementation profile is with ATProto. I don't know if I'm even able to experiment, or if I'm supposed to wait until federation is opened up (or if Bluesky will even open up federation to obscure single-user instances, whether it has to go through an approval process, etc). I don't know if someone can build their own PDS (and other components, if needed) and just start directly following anyone, without requiring any intermediate infra in between just to get a chronological timeline feed.

arcanicanis, to random

Would it be worth anything for me to spend my day writing a light dissertation comparing ATProto, ActivityPub, Nostr (very lightly, I still need to dig more), DIDs, etc.? (and having to re-read much of the specs again)

If so, expect that it'd be at least as long as (if not longer than): https://arcanican.is/primer/ap-decentralization.php

and just to declare: I've only written a full ActivityPub server implementation; I have not written an ATProto server nor anything Nostr yet. I believe I have a fairly complete understanding of how ATProto comes together in its components, compared to a lot of other people that seem to be commenting on it.

PurpCat, to random
@PurpCat@clubcyberia.co

can't even get a job at a college without bullshit

arcanicanis,

and the humorous irony is that just by working as an independent contractor, you skip all of that. I’ve done work for colleges, universities, and banks without any degrees or certifications (at the time), because just being a contractor skips HR entirely.

icedquinn, to random
@icedquinn@blob.cat

> http://game.notch.net/drowning/#
:blobcatgrimacing: well that's dark

arcanicanis,

Guess I can’t advance beyond a teenager by focusing on working, learning, and friends (sometimes) somehow; and I guess trying to advance in a career apparently makes you dumber.

arcanicanis,

I’m almost itched by this now: the lack of time and health as finite values, unless they’re just hidden. Nonetheless, each action should decrement a theoretical value of time, while other poor choices could also decrement health. Ergo, someone could die younger than the ‘finite amount of time’ (aka max lifetime) through poor health choices (stress, emptiness, etc).
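
Something like this napkin sketch of the mechanic, with every number made up purely for illustration:

```python
# Napkin sketch: every action spends time, and poor choices also drain health,
# so death comes either from exhausting the maximum lifetime or from health
# hitting zero first. All values are made up.
MAX_LIFETIME = 80 * 365  # days

class Life:
    def __init__(self):
        self.time_left = MAX_LIFETIME
        self.health = 100

    def act(self, days: int, health_cost: int = 0) -> bool:
        """Spend time on an action; return False once the character has died."""
        self.time_left -= days
        self.health -= health_cost
        return self.time_left > 0 and self.health > 0

life = Life()
life.act(days=30)                   # a "neutral" month
life.act(days=30, health_cost=15)   # a stressful/empty month ages you faster
```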

arcanicanis, to random

That’s just unusually amusing: https://wordpress.org/playground/

Essentially building an in-browser equivalent of a LAMP stack, providing a full self-contained development environment of WordPress, with SQLite backend, Wasm build of PHP, and an internal JavaScript-based webserver, all entirely client-side.

It obviously makes no sense for production use, and has nothing to do with self-hosting, but it’s an amusing “just because we can” demo.

sjw, to random
@sjw@bae.st

>What if we just give the tards a bunch of weed so they don't sperg out?

https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/37937428/

arcanicanis,

There is developing evidence of medicinal CBD/THC improving psychiatric and behavioural presentations in general. In particular, there is emergent proof in certain key areas of influence of medicinal CBD/THC positively supporting challenging behaviour, for example in children with neurodevelopmental disorders.

Forget Ritalin, let’s try trialing weed on kids for the lulz, since experimenting with drugs on kids in their formative years worked out so well..

arcanicanis,

I assume there are cases where it could be constructive, but I’d reasonably guess that if it becomes more commonplace, it’d just end up as a “catch-all solution”, just as Ritalin was the catch-all for any “problematic children” (and I know a lot of people that complain of Ritalin screwing up some of their younger years).

coolboymew, to random
@coolboymew@shitposter.club

I deleted my facebook. It was my source of pics and memes for forever but it's enough, it's time to let it go and move on to smaller less predatory websites

Any recommendations of good hubs like these, sites that aren't massively cancerous or censored to hell? Back in the days all those nice facebook pages and groups would be a personal hosted blog or a small forum, but can't have that anymore

arcanicanis,

On a sidenote: did you curiously try requesting a data export of everything that they’ve cataloged on you before deletion?

arcanicanis,

I just do it for amusement; I did it when I closed my Facebook account like 6+ years ago, where the export included things like: anything deleted, any like/dislike, friend/unfriend, any RSVP action on any event and any check-in, facial recognition data, etc.

Just recently I deleted my Twitch account, doing an export beforehand, and: Twitch is pretty much an advertising company marketed as a streaming site. Pretty much every action is recorded, with a [datetime, IP address, GeoIP, ASN] tuple on nearly every action, even logging things like EACH INDIVIDUAL MINUTE WATCHED of a VOD or stream, etc. Every pageview and referrer URL, down to flipping between the profile page, schedule page, and such of an account, etc.

Screenshot of data export with spreadsheets named: unfollow, unfollow_game, follow, follow_game, bits_cheered, bits_acquired, subscriptions, update_email, login_rename, update_password, two_factor_enable, two_factor_disabled, oauth_deauthorize, oauth_authorize, logout, login, connections_connect, auth_success_two_factor, auth_failure_two_factor
Screenshot of spreadsheet document 'minute_watched' detailing each logged minute of a video stream 'resoniteapp', with columns of datetime, RTMP client, user agent, and more

arcanicanis,

Yes, although I'd assume some of that to be summarized data, rather than a separate record for each individual minute, and for context, the columns of just this one dataset are:

time, broadcaster_software, browser, buffer_empty_count, channel, channel_id, cluster, current_bitrate, device_id, domain, flash_version, buffer_size, live, minutes_logged, os, partner, referrer_domain, referrer_host, referrer_url, stream_format, url, ip, video_buffer_size, video_height, video_width, country, city, region, hls_target_duration, asn, quality, host_channel, login, chromecast_sender, client_time, hls_latency_encoder, hls_latency_broadcaster, seconds_offset, subscriber_web, viewability_percent_viewable, viewability_client_height, viewability_client_width, percent_width, device_diagonal, app_version, backend, platform, content_mode, current_fps, app_fullscreen, app_window_width, app_window_height, chat_visible, vod_id, vod_type, vod_format, ui_version, vod_timestamp, muted, volume, mobile_connection_type, playback_mode, captions_enabled, quality_change_count, player_size_mode, is_https, transport_segments, turbo, user_id, backend_version, time_utc, player, staff, game, host_channel_id, medium, content, community_name, community_id, orientation, collection_id, vod_cdn_origin, vod_cdn_region, transcoder_type, autoplayed, language, chat_visibility_status, language_short, referrer, client_app, dropped_frames, rendered_frames, decoded_frames, content_id, customer_id, broadcaster_software_long, device_model, device_os_version, device_manufacturer, low_latency, browser_family, browser_version, os_name, os_version, hidden, auto_muted, impression_id, item_position, row_name, row_position, item_tracking_id, is_fallback_player, latency_mode_toggle, client_build_id, squad_stream_id, squad_stream_presentation_id, tag_set_long, tag_filter_set_long, tag_set, tag_filter_set, encrypted, clock_drift, low_latency_forced, vid_display_width, vid_display_height, estimated_bandwidth, chromecast_sender_long, average_bitrate, collapse_left, collapse_right, multistream_id, multistream_presentation_id, core_version, user_agent, channel_restriction_status, viewer_exemption_status, channel_restriction_type, viewer_exemption_reason, previous_tracking_id, charging, battery_percent, distinct_id, frame_variance, client_time_utc, is_mod, hls_latency_ingest, mod_view, origin_dc, location, consent_comscore_ok, protocol, video_late_count, video_late_duration, video_skip_count, video_skip_duration, airplay_active, thermal_state, sink_buffer_size, player_state, ptb, raid_id, player_framework, player_framework_version, in_background

arcanicanis,

Yes, I’m not saying that broad data warehousing “can’t” be done, but questioning the relevance of it. Individualized minutes watched, at a specific time, [general] location, device screen orientation, and so on isn’t going to make that much of a difference to warehouse perpetually (or perhaps later summarize, though that’s not the case in this situation) for something over 4 months old when payouts have already been tabulated.

I can understand the scope for some degree of application telemetry for feedback, but the resolution and expansive timeframe of it feels like nothing more than an apparatus to collect and sell data in a surveillance state.

And all of it bloats the operational expenses of a platform, unless it’s solely propped up for the intent of data wholesale or ‘information sharing programs’, or just typical absurd VC startup spending.

PurpCat, to random
@PurpCat@clubcyberia.co

BREAKING FEDIVERSE NEWS: FireFish is KILL after furry Kainoa/ThatOneCalculator in typical furry fashion, all but abandoned the project. Frustrated developers are starting a new project called "Catodon" due to stagnation and a dissatisfaction with Kainoa as a project leader.

Essentially you'll have to upgrade your jank FireFish instance to Catodon if you want something newer.
https://codeberg.org/catodon/catodon
https://catodon.social/notes/9nvp68a5a10zrdi2

arcanicanis,

Plus being that they are Japanese they are less likely to insert politics into their projects.

I think that’s likely going to be less the case over time, as it feels like anyone across any demographic or nation, roughly 25 years old or younger, seems to fall into this pattern if they grew up primarily on the internet, where much of the mass hysteria online has been their environment of ‘normalcy’.

There’s not much opportunity for passed-down customs and culture if they’re Terminally Online and/or hikikomori, which many specialist developers fall into.

sjw, to random
@sjw@bae.st

Testing in a webm container
sbfhrEhqAS1IfA.webm

arcanicanis,

What I understand of film grain synthesis is that it samples the frame-by-frame appearance and disappearance of film grain spots (their general size and pattern), filters the grain out, and then encodes that output as if it were the source video instead; at playback it recreates the film grain appearance on-the-fly, which can actually help obscure some underlying encoding artefacts, and also keeps it from appearing a little uncanny or ‘blurry’, as it would if the film grain were dropped entirely.

In your sample, it seems like the film grain is very faint and fine, so there’s probably not a great amount to be gained from it. Meanwhile re-encoding some of the older X-Files with grain synthesis can have some pretty big filesize gains, since it’s usually heavier and larger film grain.

10-bit definitely helps get rid of banding quite a bit, even if the source is 8-bit, because I believe it gives more color depth to work with (rather than being more constrained) for the compressed representation; and weirdly it may often even come out a little smaller, given that most of the development focus and optimization seems to be on 10-bit encoding.

There is a bit of blocking noticeable in a few parts, but I assume that’s purely the source video, because it seems to be fairly hard to see many blocking artefacts in AV1 unless it’s really potato-quality encoded.

I assume that was encoded with Intel’s SVT-AV1 then? Any specific encoding preset used?
