• 0 Posts
  • 39 Comments
Joined 4 months ago
Cake day: December 4th, 2024


  • That’s a view from the perspective of utility, yeah. The downvotes here are likely also from an ethics standpoint, since most LLMs are currently trained on other people’s work without permission, all while using large amounts of water for cooling and energy from our mostly coal-powered grid. And that’s not even mentioning the physical and emotional labor that many untrained workers are required to do when sifting through these LLMs’ training datasets, removing unsavory data for extremely low wages.

    A smaller, more specialized LLM could likely perform this same functionality with much less training, on a more selective data set (probably only a couple of terabytes at its largest, I’d wager), and would likely be small enough to run on most users’ computers after training (rough sketch below). That’d be the more ethical version of this use case.
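    For a rough sense of what I mean, here’s a minimal sketch of calling a small, locally hosted model through the Hugging Face transformers library. The model name and prompt are placeholders, not recommendations.

    ```python
    # Sketch: a small specialized model running entirely on a user's own machine.
    # The model name and the prompt are placeholders for this example.
    from transformers import pipeline

    # A small model (a few billion parameters or less) can run on a typical
    # desktop, on CPU if there's no GPU available.
    generator = pipeline("text-generation", model="some-small-specialized-model")

    output = generator("<task from the parent thread goes here>", max_new_tokens=64)
    print(output[0]["generated_text"])
    ```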


  • I think it’s important to also use the more specific term here: LLM. We’ve been creating AI automation for ourselves for years; the difference is that software vendors are now adding LLMs to the mix.

    I’ve heard this argument before in other instances. Ghidra, for example, just had an LLM pipeline rigged up by LaurieWired to take care of the more tedious process of renaming various functions during reverse engineering. It’s not the end of the analysis process; it just takes out a large amount of busywork. I don’t know about the use case you described, but it sounds similar. It also seems feasible that you could train a model on your own machine (given you have enough reverse-engineered programs) and then run it locally to do this kind of work (see the sketch at the end of this comment), which is a far cry from the disturbingly large LLMs that guzzle massive amounts of data and energy to train and run.

    EDIT: To be clear, because LaurieWired’s pipeline still relies on mainstream LLMs, which are unethically trained, using it is also unethical. It has the potential to be ethical, but currently isn’t.
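    To make the local-pipeline idea a bit more concrete, here’s a minimal sketch of a Ghidra Python (Jython) script that asks a locally hosted model to suggest names for Ghidra’s auto-named functions. This is not LaurieWired’s actual pipeline; the Ollama-style endpoint, the model name, and the prompt format are all assumptions on my part.

    ```python
    # Sketch: rename Ghidra's auto-generated FUN_ functions using a locally
    # hosted model. Endpoint, model name, and prompt are assumptions.
    import json
    import urllib2  # Ghidra's bundled Jython is Python 2

    from ghidra.app.decompiler import DecompInterface
    from ghidra.program.model.symbol import SourceType

    ENDPOINT = "http://localhost:11434/api/generate"  # assumed Ollama-style local server
    MODEL = "some-local-model"                         # placeholder model name

    def suggest_name(c_code):
        # Send the decompiled C to the local model, get back one identifier.
        payload = json.dumps({
            "model": MODEL,
            "prompt": "Suggest one snake_case name for this function. "
                      "Reply with the name only.\n\n" + c_code,
            "stream": False,
        })
        reply = urllib2.urlopen(urllib2.Request(ENDPOINT, payload)).read()
        return json.loads(reply)["response"].strip()

    decomp = DecompInterface()
    decomp.openProgram(currentProgram)

    for func in currentProgram.getFunctionManager().getFunctions(True):
        if not func.getName().startswith("FUN_"):
            continue  # only touch Ghidra's auto-generated names
        result = decomp.decompileFunction(func, 60, monitor)
        if not result.decompileCompleted():
            continue
        func.setName(suggest_name(result.getDecompiledFunction().getC()),
                     SourceType.USER_DEFINED)
    ```

    The whole point is that nothing leaves your machine: the decompiled code only ever goes to a model running locally.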



  • It feels like you’re making a semantic argument to downplay how tight a grip these programs have on their respective industry markets.

    If you are only ever considered for a job when you have Photoshop experience, and that’s the norm across the majority of the industry, then that’s a standard the industry is holding you to - an industry standard, if you will. It doesn’t need to be backed by a governing body to count.

    My current understanding is that you will not get a job at a major CGI company by knowing Blender (though the film ‘Flow’ shows that might change going forward). You have to know software like Houdini, 3ds Max, Maya, etc., if you want to be taken seriously.


  • That entire solution immediately falls apart when the paradigm is patented by the vendor, who immediately sues any competing software using UI elements even vaguely similar to theirs. This has been going on for decades, and one of three things usually happens: the competitor gets bought up, gets sued out of existence, or has to keep their UI different enough that there’s little-to-no bleedover between the user bases (and usually starves to death from too little revenue).


  • There is a practice where software companies either provide their software to schools and colleges for free or pay those schools and colleges to use it. This leads to students using that software, learning its particular paradigm, and essentially being forced to keep using it going forward because of how difficult it is to shift to different software with a different paradigm. This is Vendor Lock-In: the vendor locks you into their software.

    This leads to all future workers being trained in that software, so of course businesses opt to use it instead of retraining employees on something else. That clashes with what the term ‘industry standard’ implies. The name suggests the software is used across the industry because it’s better than the alternatives, but in reality it’s standard because of lock-in.

    This is how Windows cornered the operating system market - by partnering with PC manufacturers to ship their machines with Windows pre-installed.



  • Something that deteriorates the structural integrity of load-bearing frameworks /s.

    Being serious, it’s another programming language that is gaining popularity. Others can expand on why it’s good; I’ve never used it myself, so I can’t comment in good faith. I also don’t have any experience with Rust-bros, so I can’t comment on their code quality. I’ve mostly just been watching, amused, as they fight with the Linux development community.



  • This kind of erases the app authors from your description of the issue. The reason apps eventually require the CLI to complete tasks is that devs think of the CLI first and then produce a stop-gap P&CI on top of it. It’s specifically how devs in the Linux environment operate that creates the gap between CLI and P&CI. If apps were developed with the P&CI in mind first, with the CLI added after, this would not be a problem - and we know this because apps developed for both Windows and Linux lack these gaps in functionality, or lack a CLI entirely.

    Your stance also de-emphasizes the difficulty of learning the CLI for the first time. It’s not the most difficult thing ever, but it can be fairly frustrating. It’s not something you want to deal with when you’re just trying to unwind after work on your PC, or while you’re trying to do your job at work. I think it’s pretty reasonable that most people don’t want to learn yet another paradigm just to do what they’ve already figured out how to do with a P&CI.

    Being realistic, of course, this paradigm shift is not going to happen. Linux will continue to make up only a small portion of end users’ computers because of this, and because of the various other reasons it’s found unpalatable.

    I’ve heard that KDE and GNOME, however, are both at a level now where P&CIs are all you really need. I have not tried them myself, though.


  • Obviously I’m talking about the DE packages, not the kernel or the CLI base. We are talking about Windows users switching to Linux-based DEs, which are directly trying to compete with Windows and macOS.

    This is not me having an issue with CLIs. I’ve been on Linux for decades. I am pointing out the perspective of those who are frustrated with Linux DEs being blatantly unready for mass adoption, specifically because they expect lay users to learn the CLI. See my previous comment and this comment for more details.


  • I was specifically trying not to sound conspiratorial. I’m pointing out that it’s a matter of having already learned a paradigm vs. having to learn a new one.

    Devs have already gotten used to the CLI and very rarely build full P&CI suites because of it. Even when the original dev only did a CLI for the app and someone came back later and made a P&CI for it, those P&CIs are still fairly barebones. That’s a mix of devs knowing how good the CLI can be and the fact that it’s all open-source volunteer work.

    Lay users of P&CI-focused DEs actively avoid the CLI so they don’t have to learn it. This means most Linux apps are something to be avoided for most Windows users, making the OS mostly unusable for them.

    To be clear, when I talk about P&CI-focused DEs, like Windows and macOS, I mean that if you cannot perform an action with the P&CI, then that action essentially does not exist for the average user. Contrast that with Linux DEs, where it’s quite common to have to directly edit configs or use the CLI to perform various actions.

    As a veteran user, the CLI does not bother me. I do understand the frustration of those who want some Linux DEs to become as much of a default as Windows and macOS, because the lack of P&CI does damage that effort.

    This is not every app on Linux, obviously, but the apps best at making sure the P&CI is full-fledged are the ones developed for Windows and macOS as well as Linux - Blender, LibreOffice, Logseq, Godot, etc. The most common offenders are the utility apps, such as those that handle drivers, sound systems, DE functions, etc.



  • It’s not that they are mad others use the CLI, it’s that they’re mad that Linux devs regularly stop short of creating P&CI features, instead opting for CLI actions with no P&CI equivalent.

    It’s kind of obvious why: the CLI is already very flexible right out of the box, and it takes much less work to add functionality to a CLI than to build it into the P&CI (toy example at the end of this comment).

    At the same time, I understand the P&CI folks’ frustration, since one of the biggest obstacles to getting more people on Linux is the lack of P&CI solutions, and the fact that many actions on Linux are explained solely via the CLI.

    CLI folks have invested the time to use terminals effectively and view overuse of the P&CI as beneath them, while P&CI folks have no interest in dumping time into learning the CLI to do something they could already do on Windows with a P&CI.
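    To put some code behind that lopsidedness: with Python’s argparse, exposing a new option is a couple of lines, whereas the P&CI equivalent means designing a widget, wiring up events, and reworking the layout. The utility and flag below are made up purely for illustration.

    ```python
    # Toy example: adding one new feature to a CLI tool is a one-line flag.
    # The utility and its --dry-run flag are invented for illustration.
    import argparse

    parser = argparse.ArgumentParser(description="example utility")
    parser.add_argument("path", help="file to operate on")
    # One extra line of CLI; a P&CI needs a new widget, layout changes,
    # and event handling to expose the same feature.
    parser.add_argument("--dry-run", action="store_true",
                        help="show what would happen without doing it")

    args = parser.parse_args()
    print(("would process " if args.dry_run else "processing ") + args.path)
    ```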



  • To be clear, neural network systems will not become sentient superintelligences by themselves - another system, or multiple other systems, would likely need to be working in tandem. It’s also entirely likely that neural networks lead to a dead end and that some other machine learning technology is what actually causes an intelligence explosion.

    It’s inarguable, though, IMO, that anyone claiming LLMs or other neural network technologies will improve the efficiency or reliability of companies or government systems should not be taken seriously.


  • Could also put up:

    • Massive numbers of people are exploited in order to train various AI systems.
    • Machine learning apps that create text or images from prompts are supposed to be supplementary, but businesses are actively trying to replace their workers with this software.
    • Machine learning image generation currently shows diminishing returns on training, even as we pump exponentially more content into it.
    • Machine-generated text and images poison their generators’ own sample pools, greatly diminishing these systems’ ability to learn from real-world content.

    There’s actually a much longer list if we expand to other AI systems, like the robots we’re currently training for automated warfare. There’s also the angle of these image and text generation systems being used for political manipulation and scams. There are a lot of terrible problems created by this tech.


  • As others have said, find a professional. It can take a lot of tries, but it’s extremely helpful once you find the right person.

    As someone with ADHD, I also get anxiety from changes in my day-to-day routine. My coping mechanism for a while has been coming up with practical contingency plans. That way I at least have an idea of what to do, and of the point at which there is nothing left to do. It’s helped me get through many situations.

    As for your future and social problems, those likely need some personal analysis and personal change (professionals are meant to help with this). A lack of a future is often not an actual lack; it’s usually a personal failure to see other potentials, to see a new path to follow. It’s sometimes called learned helplessness, and it can be hard to deal with alone. Becoming antisocial (not wanting any human interaction) is also usually a difficult thing, and is usually caused by a personal neuroticism. But we need purpose as humans, and quite often we also need camaraderie.

    Thinking of yourself as a collection of habits can be helpful for this. You should be asking yourself what exactly makes you upset about other people, and try to relate it to something about yourself.

    You can’t change other people, but you can change how you react to other people. Quite often that requires a shift of perspective which acknowledges that you are a biased viewer interpreting a limited view. Instead of “people talking about themselves are annoying”, for example, “I am bothered by people when they talk about themselves” can be more useful. That way, you are talking about the emotional response you have to others rather than the perceived traits of others - your lens is now focused on you instead of on others.

    That’s all I’ve got. The path to being content is difficult, and I wish you luck.


  • Hand-make your gear if you can. K.I.S.S.

    You can make simple gear that is more durable than mass-manufactured clothes, and that’s directly repairable by your own hands. It might not be as pretty but it can serve your needs and will be better for you in most ways. You’d also be sticking it to the system by doing so, especially if you’re doing it with second-hand clothes.