What was your last RTFM adventure? Tinker this, read that, make something smoother! Or explodier.
As for me, I wanted to see how many videos I could run at once. (Answer: 60 frames per second or 60 frames per second?)
With my sights on GPUizing some ethically sourced motion pictures, I RTFW'd, graphed, and slapped on environment variables and flags like Lego bricks. I got the Intel VAAPI thingamabob to jaunt by (and found that it butterized my mpv videos):
$ pacman -S blahblahblahblahblahtfm
$ mpv --show-profile=fast
Profile fast:
scale=bilinear
dscale=bilinear
dither=no
correct-downscaling=no
linear-downscaling=no
sigmoid-upscaling=no
hdr-compute-peak=no
allow-delayed-peak-detect=yes
$ mpv --hwdec=auto --profile=fast graphwar-god-4KEDIT.mp4
# fucking silk
But there was no pleasure without pain: Mr. Maxwell F. N. 940MX (the N stands for Nvidia) played hooky. So I employed the longest envvars ever:
$ NVD_LOG=1 VDPAU_TRACE=2 VDPAU_NVIDIA_DEBUG=3 NVD_BACKEND=direct NVD_GPU=nvidia LIBVA_DRIVER_NAME=nvidia VDPAU_DRIVER=nvidia prime-run vdpauinfo
GPU at BusId 0x1 doesn't have a supported video decoder
Error creating VDPAU device: 1
# stfu
to try translating Nvidia VDPAU to VAAPI – of course, here I realized I'd RTFMed backwards and should've tried to use just VDPAU instead. So I did.
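Something like this, if memory serves (exact flags may be off):
$ VDPAU_DRIVER=nvidia prime-run mpv --hwdec=vdpau graphwar-god-4KEDIT.mp4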
Juice was still not acquired.
Finally, after a voracious DuckDuckGoing (quacking?), I was blessed with the freeing knowledge that even though post-Kepler is supposed to support H264, Nvidia is full of lies…
 ______
< fudj >
 ------
    \  '^----^'
     \ (◕('人')◕)
       (   8   ) ô
       (   8   )_______( )
       (  8 8  )
       (_______________)
         ||       ||
        (||      (||
and then right before posting this, gut feeling: I can’t read.
$ lspci | grep -i nvidia
... NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)
# ArchWiki says that GM108 isn't supported.
# Facepalm
SO. What was your last RTFM adventure?
for me it usually goes
me: reads the manual, fails, then asks for help
person helping: here's a canned tip
me: didn't help
person helping: you should read the manual
me: no, i am beyond that, i need help with my problem
person helping: oh, turns out i couldn't actually help you, anyways go try somewhere else
And if it was an issue on GitHub:
Closed: “couldn’t reproduce” 10 seconds after that last comment.
Not my last, but after using killall on Linux, I tried it on HP-UX, only to discover, and later confirm in the man page, that on HP-UX it doesn't take any arguments – it just kills every process.

Oh, man! This happened to me in production, working on a server that did the invoicing for a large company. Mind you, I was assisted by a senior admin who assured me killall works on HP-UX. It worked "better" than expected.
And probably, at some point, the guy who executed the command…
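For anyone who hasn't been bitten yet, the contrast goes roughly like this (from memory – check the man pages on your system):
$ killall nginx   # Linux (psmisc): kills only processes matching the given name
$ killall         # HP-UX: takes no name – terminates every active process (man 1M killall)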
I was trying to write a custom Strategy for an ObjectMapper in Java. Foolishly, I decided to ask ChatGPT about it and got instructions suggesting an implementation that was the inverse of how Strategies actually work. Stuck for an afternoon.
Then in the evening I read the docs and put it together in half an hour from scratch. Lesson learned about the stochastic parrots.
Hah, stochastic parrots.
Makes me wonder. For every laziness I've had with the vector guessers, I've seen an exact counterweight.
"Here's random code. Pray it works" vs. free ancient code on a matrix scrombulator webpage (2007–2014)
"How does this API work?" (when the API has below 10 million sample lines of code) vs. incredibly concise documentation worth spending 2 minutes on, or HTML text without margin lines worth spending 20 minutes on (man 3 getifaddrs)
"Maybe this is what's causing your bug. Investigate a, b, and c. Conclusion sentence." vs. a footnote in the ArchWiki / an archetypal 2009 StackOverflow duplicate
"Here's the main idea of X… you need to take into account a combination of facets to ensure safety." vs. an angry blog post about X that's oddly technical (now you see both sides)
One you can invoke more often (throw ChatGPT configs against the wall until it doesn't error); the other you can invoke more deeply. So I can't help but wonder – when we cancel out all the terms – whether the timesaving sum is positive or negative. ¯\_(ツ)_/¯
I learned that rpm-ostree can't remove packages from an OCI image, ever.
So even if I have a blue-build process, for example in secureblue, removing Firefox, it is just removed on my side, locally. That's why I can't reinstall it.
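For anyone curious, the local-only flavor looks like this – worth double-checking against the rpm-ostree docs:
$ rpm-ostree override remove firefox   # masks the package in my deployment only; the base image is untouched
$ rpm-ostree override reset firefox    # undoes the local override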
Instead of learning about all the Flatpak packaging conventions, I just translated the docs!
I have been burnt too many times by vendor incompatibility at work to not read the manuals before deploying something.
I tried to install ROCM on my machine to run Stable Diffusion. So far I’ve managed to bork my system to the point of having to reinstall.
I'd recommend using ROCM through a Distrobox container; personally I use this Distrobox container file and it has suited all of my needs with Stable Diffusion so far.
That is, if you're still interested in it - I could totally understand writing it off after what happened 😅
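For the general shape of it (the image tag here is just an example – grab whichever ROCM dev image matches your distro):
$ distrobox create --name rocm-box --image docker.io/rocm/dev-ubuntu-22.04
$ distrobox enter rocm-box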
Thanks, I’ll look into that.
I’ve installed ROCM before reading that my AMD GPU does not support it
Most consumer ones don't, but for a lot of them I've heard there's a hack that works by identifying the card as a similar supported one.
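The usual trick is an environment override spoofing a supported target – the version number is card-specific, so treat this as a placeholder (and launch.py as however you start Stable Diffusion):
$ HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py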
I got it to run before, but then the Mint 22 upgrade borked my system. I don't know if it was because of ROCM or Pipewire. Then I reinstalled Mint and tried to install ROCM again, but that borked it again. So let's see if it works this time.
Hardware-related, on a home-built Linux NAS.
My mobo has 2 NVMe slots and supports 10th and 11th gen Intel CPUs. I have a 10th gen i5 and 2 NVMe SSDs for cache.
The bigger 512 GB SSD is on the front (normal) side of the mobo, under a heatsink. The smaller 128 GB one is on the underside of the mobo, inaccessible once it's fixed into the case.
In the BIOS and in the OS I can't see the 512 GB cache drive, only the 128. A quick RTFM of the motherboard manual states: "Front NVMe slot only works with 11th gen CPU".
FFS 🤦‍♂️
The server is fully built, in an ITX case where it's hard to fit everything.
Guess who's keeping only 128 GB of cache instead of disassembling everything?
Quacking, I like it!
One of the largest projects under my GitHub account is an attempt at a proof-based programming language that I had to abandon because I underestimated the theoretical work involved, did not RTFM enough and months into it realized the entire thing was unsound af.
I’m very intrigued. Could you please explain it? Even if you abandoned it, you still learned valuable knowledge.
Yes, that’s true and a better way to look at it, thanks!
Well, I was amazed by proof systems like Coq or Isabelle, that let one formally verify the correctness of their code. I learnt Coq and coded a few toy projects with it, but doing so felt pretty cumbersome. I looked at other options but none of them had a really good workflow.
So, I attempted to design one from scratch. I tried to understand Coq’s mathematical foundation and reimplement it into a simpler language with more familiar syntax and a native compiler frontend. But I rushed through it and turns out I had barely scratched the surface of the theory. Not just regarding the proof system, but also with language design in general.
I did learn a lot though. Since then I’ve been reading more about proof systems and language design in my spare time, and I’ve collected quite the stack of notes and drafts. Recently I’ve begun coding a way more polished version of that project, so on to round two I guess!
Round two, hell yeah.
The aesthetica of a stack of notes, born from a “dead end”, is secretly an odd motivator. You look back and see
Here is the breadth of what we did wrong.
and then beyond you, the effort lays itself out in a pretty trusswork.
_or_maybe_i_just_think_well-used_notebooks_are_pretty
Haha yeah, absolutely! Might be too messy to consider it “well used” though… But it does motivate me, seeing all the signs I put there and imagining one day I will conquer that mountain. Maybe not even on the second attempt, but definitely one day.
No mention of the limited API in the Nessus Professional documentation. Waste of time trying to test the API and debugging why some method doesn't work.

Never used Kubernetes before, but really wanted to get into it with this new project. The project already has docker-compose. Found a converter to Kubernetes, ran it, and it mostly worked, but I had to dive into a week of reading the documentation and testing to get the rest of the way there.
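If the converter was kompose (an assumption on my part – it's the usual suspect), the round trip is just:
$ kompose convert -f docker-compose.yml
$ kubectl apply -f .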
The depth of a dive is always delightful! Does K8s have a solid use-case for the project or did you just sK8 for fun?
Couldn't get geolocation working for weeks in openSUSE. I, supposedly, read the manual, checked everywhere, and even asked in the openSUSE forum. The timing was perfect with Mozilla shutting down MLS, and that probably was one reason, but no alternative worked either. Some days ago I decided to RTFM of geoclue again, only to find out that I could just "hardcode" my location in an /etc/geolocation file >:(
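From memory, the format is four bare lines – latitude, longitude, altitude in meters, and accuracy radius in meters – so something like:
$ cat /etc/geolocation
51.5
-0.1
0
10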
Wanted to see if I could do anything exciting with the new Satisfactory dedicated server API. There's no documentation of it anywhere online, but there's a random markdown file documenting it in the installation directory. Got it working, but it turns out it can't do much. Oh well.
Trying to set up dnscrypt-proxy on my personal laptop. I tend to assume things are more complicated than they are, so I went down the rabbit hole searching for all manner of issues and setup guides. It's not hard… RTFM.
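For posterity, the two lines that actually matter in the config (resolver name is just an example – pick yours from the public-resolvers list):
$ grep -E '^(server_names|listen_addresses)' /etc/dnscrypt-proxy/dnscrypt-proxy.toml
server_names = ['cloudflare']
listen_addresses = ['127.0.0.1:53']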
For me, it was getting a handle on rsync for a better method of updating backup drives. I was tired of pushing incremental changes manually, but I decided to do a bit of extra reading before making the leap. Learning about the -n option for testing prior to a sync has saved me more headaches than I’d care to enumerate. There’s a big difference between changing a handful of files and copying several TB of files into the wrong subfolder!
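My routine now looks something like this (paths made up; --delete is the flag to be scared of):
$ rsync -avn --delete ~/data/ /mnt/backup/data/   # dry run: list what would change, touch nothing
$ rsync -av --delete ~/data/ /mnt/backup/data/    # same command minus -n once the preview looks sane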
Oh I love the “walk me through what I’m about to do” concept. Dry runs should be more common – especially in shell scripts…
The world would be a better place if every install.sh had a --help, some nice printf's saying "Moving this here" / "Overwrite? [Y/N]", and perhaps even a shoehorned-in set -x.
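A minimal sketch of that fantasy install.sh (every message and path invented, obviously):
#!/bin/sh
set -eu
if [ "${1:-}" = "--help" ]; then
    echo "usage: $0 [--help]    (set DEBUG=1 for a set -x trace)"
    exit 0
fi
[ -n "${DEBUG:-}" ] && set -x            # the shoehorned-in set -x
printf 'Moving config to %s\n' "$HOME/.config/foo"
printf 'Overwrite existing files? [Y/N] '
read -r answer
case "$answer" in
    [Yy]*) printf 'Overwriting…\n' ;;
    *) echo 'Aborting.'; exit 1 ;;
esac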
Hope your r/w wasn't eaten up by the subfolder incident (that I presume happened) :P
I'm lucky I manually ran a few jobs before I started using rsync in scripts. When I hadn't thought things through, I saw the output in real time. After that, I got very careful about testing any scripts and accounting for minor changes in setup.