Hi, I’m just getting started with Docker, so apologies in advance if this seems silly.

I used to self-host multiple services (RSS reader, invoicing software, personal wiki) directly on a VPS using nginx and MariaDB. I messed that setup up recently and am starting again, but this time I’m taking the Docker route.

So I’ve set up the invoicing software (InvoiceNinja), and everything is working as I want.

Now I want to add the other services (ttrss and dokuwiki). Should I set up new containers for them? It feels wasteful.

Instead, I could add additional configs to the existing servers that the InvoiceNinja docker-compose generated (nginx and mysql), but I’m worried that an update to InvoiceNinja could then mess up the other setups as well.

It shouldn’t, from my understanding of how docker containers work, but I’m not 100% sure. What would be the best way to proceed?

  • Decronym@lemmy.decronym.xyzB · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters | More Letters
    HTTP  | Hypertext Transfer Protocol, the Web
    VPS   | Virtual Private Server (as opposed to shared hosting)
    nginx | Popular HTTP server

    2 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

    [Thread #436 for this sub, first seen 18th Jan 2024, 10:55]

  • Illecors@lemmy.cafe · 10 months ago

    I would suggest running nginx as a reverse proxy (I prefer keeping it out of a container, as it’s easier to manage) and then having your services in whatever medium you prefer.
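
    As a sketch, a host-level nginx server block proxying to a containerized service could look like this (the domain name, file path, and published port are placeholders):

    ```nginx
    # Hypothetical server block, e.g. /etc/nginx/conf.d/app1.conf
    server {
        listen 80;
        server_name app1.example.com;

        location / {
            # Forward requests to the container's port published on localhost
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    Each additional containerized service then just needs another server block pointing at its own published port.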

    • mudeth@lemmy.caOP · 10 months ago

      Yes, that’s exactly what I’m doing now. I was only unsure about how to map the remaining services: in the same Docker containers, or in new ones.

  • N0x0n@lemmy.ml · 10 months ago

    This is how I do it. Not saying it’s the best way, but it serves me well :).

    For each application, one docker-compose.yml. That file holds all of the application’s linked containers, but the different applications stay separate!

    Every application lives in its respective folder:

    • /home/user/docker/app1/docker-compose.yml
    • /home/user/docker/app2/docker-compose.yml
    • /home/user/docker/app3/docker-compose.yml
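
    As a sketch, each folder’s docker-compose.yml bundles the application with its own linked containers (the image names and credentials below are placeholders):

    ```yaml
    # e.g. /home/user/docker/app1/docker-compose.yml (illustrative)
    services:
      app:
        image: example/app1:latest   # placeholder application image
        depends_on:
          - db
      db:
        image: mariadb:10.11
        environment:
          MARIADB_DATABASE: app1
          MARIADB_USER: app1
          MARIADB_PASSWORD: changeme
          MARIADB_ROOT_PASSWORD: changeme
        volumes:
          - ./db-data:/var/lib/mysql   # data survives container updates
    ```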

    Everything is behind an application proxy (Traefik in my case) and served with a self-signed certificate.

    I access all my apps through their domain names on my LAN over WireGuard.

    • mudeth@lemmy.caOP · 10 months ago

      Yes, this is what I want to do. My question is how Docker manages shared processes between these apps (for example, if app1 uses MySQL and app2 also uses MySQL).

      Does it take up the RAM of two MySQL processes? That seems wasteful, especially since I’m on a low-RAM VPS. I’m getting conflicting answers, so it looks like I’ll have to try it out and see.

      • N0x0n@lemmy.ml · 10 months ago

        Nah, that’s not how it works! I have over 10 applications and half of them have databases; being light on resources is the prime objective of containers! They’re less resource-intensive and easier to deploy on low-end machines. If I had to deploy 10 VMs for my 10 applications, my computer would not be able to handle it!

        I have no idea how it works underneath; that’s a more technical question about how container engines work. But if you Searx it or ask ChatGPT (if you use that kind of tool), I’m sure you’ll find out :).

    • mudeth@lemmy.caOP · 10 months ago

      That would be ideal, per my understanding of the architecture.

      So will Docker minimize the system footprint for me? If I run two MySQL containers, will it not necessarily take twice the resources of a single MySQL container? The existing mysql process in top is using 15% of my VPS’s RAM; I don’t want to spin up another one if that is going to scale linearly.

      • bjorney@lemmy.ca · 10 months ago

        it won’t necessarily take twice the resources of a single mysql container

        It will, as far as runtime resources go.

        You can (and should) just use the one MySQL container for all your applications. Set up a different database/schema for each application.
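
        As a sketch, the per-application separation inside one shared MySQL container could look like this (application names and passwords are placeholders), run via something like docker exec -it <mysql-container> mysql -uroot -p:

        ```sql
        -- One database and one dedicated user per application
        CREATE DATABASE app1;
        CREATE USER 'app1'@'%' IDENTIFIED BY 'changeme';
        GRANT ALL PRIVILEGES ON app1.* TO 'app1'@'%';

        CREATE DATABASE app2;
        CREATE USER 'app2'@'%' IDENTIFIED BY 'changeme';
        GRANT ALL PRIVILEGES ON app2.* TO 'app2'@'%';

        FLUSH PRIVILEGES;
        ```

        Each application then gets only its own credentials, so one app cannot touch another app’s data.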

        • mudeth@lemmy.caOP · 10 months ago

          I’m getting conflicting replies, so I’ll try running separate containers (which was the point of going the Docker route anyway: to avoid version-dependency problems).

          If it doesn’t scale well, I may just switch back to non-container hosting.

          • bjorney@lemmy.ca · 10 months ago

            To elaborate a bit more: there is the MySQL resource usage and the Docker overhead. If you run two containers from the same image, the Docker overhead will only ding you once (the image layers are shared on disk), but the actual MySQL process will consume its own CPU and memory inside each container.

            So by running two containers you are going to be using an extra couple hundred MB of RAM (whatever MySQL’s minimum memory footprint is).
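
            One way to check this on your own VPS is to compare per-container memory with docker stats (requires a running Docker daemon; container names are whatever yours are called):

            ```shell
            # Each container reports its own working set, so two MySQL
            # instances will show up as two separate memory figures.
            docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
            ```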