SaltStack states

News and Announcements related to GhostBSD
ASX
Posts: 988
Joined: Wed May 06, 2015 12:46 pm

Re: SaltStack states

Post by ASX »

What system is that? Where is that git repository located? On the EU mirror?
and yes, please add keys for me and ericbsd. ;)
kraileth
Posts: 312
Joined: Sun Sep 04, 2016 12:30 pm

Re: SaltStack states

Post by kraileth »

ASX wrote:What system is that? Where is that git repository located? On the EU mirror?
Yes - or rather: The soon-to-be EU mirror. Right now it's a test system that will have to be rebuilt for production in a clean way when everything is done.
and yes, please add keys for me and ericbsd. ;)
Ok, I'll do that step next. Give me a moment.
kraileth
Posts: 312
Joined: Sun Sep 04, 2016 12:30 pm

Re: SaltStack states

Post by kraileth »

Finally managed to set aside a few hours early in the morning to get back to working on salt! And I made some nice progress, too.

The result is 60 states which now bootstrap much of what is needed for the salt infrastructure. Here's what's new:

1) I've removed configuring the "latest" repo since I found out that the new iocage won't be available as a binary package (it depends on the current python36 while FreeBSD's default Python version is an older one) and thus must be built from ports. So the bootstrapping process now puts the ports tree in place and builds iocage.
2) Since it's no longer time-critical, I decided to do things properly and implement a "repomaster" user on the gitjail that can commit to the gitolite-admin repo. This allows for automated import of admin keys, configuration and such.
3) Admin public keys are put into the control repo as well as the saltmaster's pub key. Also all three git repositories needed to (properly) operate SaltStack are created and configured for read-only access by the saltmaster.

Code:

{% set saltmaster_ip = '10.0.1.1' %}
{% set gitjail_ip = '10.0.1.2' %}

#########################
 # UTF-8 configuration #
#########################

set_login.conf_UTF-8:
  file.blockreplace:
    - name: /etc/login.conf
    - marker_start: ':datasize=unlimited:\'
    - marker_end: ':stacksize=unlimited:\'
    - content: |
        :charset=UTF-8:\
        :setenv=LC_ALL=en_US.UTF-8,LC_COLLATE=en_US.UTF-8,LC_CTYPE=en_US.UTF-8,LC_MESSAGES=en_US.UTF-8,LC_MONETARY=en_US.UTF-8,LC_NUMERIC=en_US.UTF-8,LC_TIME=en_US.UTF-8:\
        :LANG=en_US.UTF-8:\

cap_mkdb:
  cmd.run:
    - name: 'cap_mkdb /etc/login.conf'
    - onchanges:
      - file: set_login.conf_UTF-8

set_env_lang:
  environ.setenv:
    - name: LANG
    - value: en_US.UTF-8
    - update_minion: True

set_env_lc-all:
  environ.setenv:
    - name: LC_ALL
    - value: en_US.UTF-8
    - update_minion: True

#########
 # NAT #
#########

create_lo1_if:
  file.append:
    - name: /etc/rc.conf
    - text:
      - cloned_interfaces="lo1"
      - ifconfig_lo1="inet 10.0.0.254 netmask 255.255.0.0"

  cmd.run:
    - name: ifconfig lo1 create
    - unless: ifconfig -l | grep lo1

basic_pf_nat_rules:
  file.managed:
    - name: /etc/pf.conf
    - contents: |
        ext_if="vtnet0"
        int_if="lo1"
        localnet=$int_if:network
        GITJAIL = "10.0.1.2"
        GITPORT = "220"

        scrub in all fragment reassemble
        set skip on lo0
        set skip on lo1

        #nat for jails
        nat on $ext_if inet from $localnet to any -> ($ext_if)
        rdr pass on $ext_if inet proto tcp to port $GITPORT -> $GITJAIL

load_pf_ko:
  cmd.run:
    - name: kldload pf.ko
    # grep without -v: the inverted match would succeed whenever any
    # other module is loaded, so the kldload would never run
    - unless: 'kldstat | grep -q "pf"'

activate_pf:
  file.append:
    - name: /etc/rc.conf
    - text:
      - pf_enable="YES"
      - pflog_enable="YES"

  cmd.run:
    - name: pfctl -e -f /etc/pf.conf
    - unless: 'service pf status | grep "Status: Enabled for"'

########################
 # iocage preparation #
########################

fetch_ports_tree:
  cmd.run:
    - name: 'portsnap fetch extract'
    - creates:
      - /usr/ports/x11-wm
      - /usr/ports/accessibility

ensure_iocage_dependencies_installed:
  pkg.installed:
    - names:
      - python36
      - ca_root_nss

ensure_iocage_installed:
  cmd.run:
    - name: make install clean
    - cwd: /usr/ports/sysutils/py3-iocage/
    - unless: pkg info | grep py36-iocage

jail_fetch_fbsd11:
  cmd.run:
    - name: 'iocage fetch --release 11.0-RELEASE'
    - creates: /iocage/releases/11.0-RELEASE

##################################
 # salt minion template (pt. 1) #
##################################

saltminion_create_jail:
  cmd.run:
    - name: iocage create tag=saltminion ip4_addr="lo1|10.0.0.1/24" -r 11.0-RELEASE
    - unless: test `iocage list | grep saltminion | wc -l` -gt 0 -o `iocage list -t | grep saltminion | wc -l` -gt 0 && true || false

saltminion_ensure_running:
  cmd.run:
    - name: iocage start saltminion
    - unless: test `iocage list | grep saltminion | grep up | wc -l` -gt 0 -o `iocage list -t | grep saltminion | wc -l` -gt 0 && true

saltminion_ensure_pkg:
  cmd.run:
    - name: iocage pkg saltminion "install pkg"
    - env:
      - ASSUME_ALWAYS_YES: 'yes'
    - unless: test -e /iocage/tags/saltminion/root/usr/local/sbin/pkg -o `iocage list -t | grep saltminion | wc -l` -gt 0 && true

saltminion_ensure_salt_installed:
  cmd.run:
    - name: iocage pkg saltminion install py27-salt
    - env:
      - ASSUME_ALWAYS_YES: 'yes'
    - unless:  test -d /iocage/tags/saltminion/root/usr/local/etc/salt -o `iocage list -t | grep saltminion | wc -l` -gt 0 && true

saltminion_ensure_stopped:
  cmd.run:
    - name: iocage stop saltminion
    - unless: test `iocage list | grep saltminion | grep down | wc -l` -gt 0 -o `iocage list -t | grep saltminion | wc -l` -gt 0 && true

saltminion_convert_template:
  cmd.run:
    - name: iocage set template=yes saltminion
    - unless: iocage list -t | grep saltminion

saltminion_minion_config:
  file.managed:
    - name: /iocage/templates/saltminion/root/usr/local/etc/salt/minion
    - contents: 'master: {{ saltmaster_ip }}'
    - unless: grep master_finger /iocage/templates/saltminion/root/usr/local/etc/salt/minion

########################
 # saltmaster (pt. 1) #
########################

saltmaster_create_jail:
  cmd.run:
    - name: iocage create tag=saltmaster ip4_addr="lo1|{{ saltmaster_ip }}/24" -t saltminion
    - unless: iocage list | grep saltmaster

saltmaster_set_hostname:
  cmd.run:
    - name: iocage set host_hostname=saltmaster saltmaster
    - unless: iocage get host_hostname saltmaster | grep saltmaster

saltmaster_master_config:
  file.managed:
    - name: /iocage/tags/saltmaster/root/usr/local/etc/salt/master
    - contents: |
        ipv6: False
        publish_port: 4505
        ret_port: 4506
        pidfile: /var/run/salt-master.pid
        root_dir: /
        pki_dir: /usr/local/etc/salt/pki/master
        cachedir: /var/cache/salt/master
        sockdir: /var/run/salt/master
        verify_env: True
        keep_jobs: 24
        timeout: 5
        loop_interval: 60
        output: nested
        show_timeout: True
        color: True
        job_cache: True
        minion_data_cache: True
        preserve_minion_cache: False
        #####
        open_mode: False
        auto_accept: False
        token_expire: 43200
        file_recv: False
        state_top: top.sls

saltmaster_enable_minion_rc-conf:
  file.append:
    - name: /iocage/tags/saltmaster/root/etc/rc.conf
    - text: |
        salt_master_enable="YES"
        salt_minion_enable="YES"

saltmaster_start_temporarily:
  cmd.run:
    - name: iocage start saltmaster
    - unless: test `iocage list | grep saltmaster | grep up | wc -l` -gt 0 -o `grep master_finger /iocage/templates/saltminion/root/usr/local/etc/salt/minion | wc -l` -gt 0

##################################
 # salt minion template (pt. 2) #
##################################

saltminion_complete_minion_config:
  cmd.run:
    - name: 'echo master_finger: `iocage exec saltmaster "salt-call key.finger --local"` >> /iocage/templates/saltminion/root/usr/local/etc/salt/minion'
    - unless: grep master_finger /iocage/templates/saltminion/root/usr/local/etc/salt/minion

saltminion_fix_minion_config:
  file.replace:
    - name: /iocage/templates/saltminion/root/usr/local/etc/salt/minion
    - pattern: 'local: '
    - repl: ''

########################
 # saltmaster (pt. 2) #
########################

saltmaster_gen_ssh_keys:
  cmd.run:
    - name: iocage exec saltmaster 'ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519'
    - unless: test -e /iocage/tags/saltmaster/root/root/.ssh/id_ed25519

saltmaster_install_pygit2:
  cmd.run:
    - name: iocage pkg saltmaster install py27-pygit2
    - env:
      - ASSUME_ALWAYS_YES: 'yes'
    - unless:  test -e /iocage/tags/saltmaster/root/usr/local/lib/python2.7/site-packages/_pygit2.so

saltmaster_stop_jail:
  cmd.run:
    - name: iocage stop saltmaster
    - onlyif: iocage list | grep saltmaster | grep up

saltmaster_copy_minion_config:
  file.copy:
    - name: /iocage/tags/saltmaster/root/usr/local/etc/salt/minion
    - source: /iocage/templates/saltminion/root/usr/local/etc/salt/minion
    - force: True
    - unless: grep master_finger /iocage/templates/saltminion/root/usr/local/etc/salt/minion

#############
 # gitjail #
#############

gitjail_create_jail:
  cmd.run:
    - name: iocage create tag=gitjail ip4_addr="lo1|{{ gitjail_ip }}/24" -t saltminion
    - unless: iocage list | grep gitjail

gitjail_set_hostname:
  cmd.run:
    - name: iocage set host_hostname=gitjail gitjail
    - unless: iocage get host_hostname gitjail | grep gitjail

gitjail_set_ssh_port:
  file.replace:
    - name: /iocage/tags/gitjail/root/etc/ssh/sshd_config
    - pattern: '#Port 22'
    - repl: 'Port 220'

gitjail_enable_ssh:
  file.append:
    - name: /iocage/tags/gitjail/root/etc/rc.conf
    - text: sshd_enable="YES"

gitjail_start_jail:
  cmd.run:
    - name: iocage start gitjail
    - unless: iocage list | grep gitjail | grep up

gitjail_install_gitolite:
  cmd.run:
    - name: iocage pkg gitjail install gitolite
    - env:
      - ASSUME_ALWAYS_YES: 'yes'
    - unless:  test -e /iocage/tags/gitjail/root/usr/local/bin/git

gitjail_install_sudo:
  cmd.run:
    - name: iocage pkg gitjail install sudo
    - env:
      - ASSUME_ALWAYS_YES: 'yes'
    - unless:  test -e /iocage/tags/gitjail/root/usr/local/bin/sudo

gitjail_ensure_git_group:
  cmd.run:
    - name: 'iocage exec gitjail pw groupadd -n git -g 9418'
    - unless: grep 9418 /iocage/tags/gitjail/root/etc/group

gitjail_ensure_git_user:
  cmd.run:
    - name: 'iocage exec gitjail "pw useradd -n git -u 9418 -g git -c git -d /var/gitrepos -s /bin/sh -h -"'
    - unless: grep 9418 /iocage/tags/gitjail/root/etc/passwd

gitjail_ensure_repos_dir:
  file.directory:
    - name: /iocage/tags/gitjail/root/var/gitrepos
    - user: 9418
    - group: 9418
    - dir_mode: 755
    - file_mode: 644

gitjail_ensure_repomaster_user:
  cmd.run:
    - name: 'iocage exec gitjail "pw useradd -n repomaster -u 8008 -g git -c repomaster -d /usr/home/repomaster -s /bin/sh -h -"'
    - unless: grep 8008 /iocage/tags/gitjail/root/etc/passwd

gitjail_ensure_home_dir:
  file.directory:
    - name: /iocage/tags/gitjail/root/usr/home
    - user: 0
    - group: 0
    - dir_mode: 755
    - file_mode: 644

gitjail_ensure_repomaster_home_dir:
  file.directory:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster
    - user: 8008
    - group: 9418
    - dir_mode: 755
    - file_mode: 644

gitjail_gen_ssh_keys_repomaster:
  cmd.run:
    - name: iocage exec gitjail 'sudo -u repomaster ssh-keygen -t ed25519 -N "" -f /usr/home/repomaster/.ssh/id_ed25519'
    - unless: test -e /iocage/tags/gitjail/root/usr/home/repomaster/.ssh/id_ed25519

gitjail_copy_repomaster_key:
  file.copy:
    - name: /iocage/tags/gitjail/root/var/gitrepos/repomaster_key.pub
    - source: /iocage/tags/gitjail/root/usr/home/repomaster/.ssh/id_ed25519.pub
    - failhard: True

gitjail_permissions_admin_key:
  file.managed:
    - name: /iocage/tags/gitjail/root/var/gitrepos/repomaster_key.pub
    - user: 9418
    - group: 9418
    - dir_mode: 755
    - file_mode: 644

gitjail_setup_gitolite:
  cmd.run:
    - name: 'iocage exec gitjail "cd /var/gitrepos ; sudo -u git /usr/local/bin/gitolite setup -pk repomaster_key.pub"'
    - creates: /iocage/tags/gitjail/root/var/gitrepos/repositories

gitjail_repomaster_custom_ssh_config:
  file.managed:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster/.ssh/config
    - user: 8008
    - group: 9418
    - mode: 644
    - contents: |
        Host localhost
          StrictHostKeyChecking no
          UserKnownHostsFile=/dev/null

gitjail_repomaster_checkout_admin_repo:
  cmd.run:
    - name: 'iocage exec gitjail "cd /usr/home/repomaster ; sudo -u repomaster git clone ssh://git@localhost:220/gitolite-admin"'
    - creates: /iocage/tags/gitjail/root/usr/home/repomaster/gitolite-admin

gitjail_deploy_repo_user_keys:
  cmd.run:
    - name: "cp ./gitolite_files/*.pub /iocage/tags/gitjail/root/usr/home/repomaster/gitolite-admin/keydir && touch /iocage/tags/gitjail/root/usr/home/repomaster/sentinel"
    - unless: test `ls /iocage/tags/gitjail/root/usr/home/repomaster/gitolite-admin/keydir | wc -l` -gt 1

gitjail_deploy_saltmaster_key:
  file.copy:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster/gitolite-admin/keydir/saltmaster.pub
    - source: /iocage/tags/saltmaster/root/root/.ssh/id_ed25519.pub

gitjail_deploy_repo_config1:
  file.copy:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster/gitolite1.conf
    - source: ./gitolite_files/gitolite.conf
    - onlyif: test -e /iocage/tags/gitjail/root/usr/home/repomaster/sentinel

gitjail_deploy_repo_config2:
  file.copy:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster/gitolite2.conf
    - source: ./gitolite_files/gitolite.conf
    - onlyif: test -e /iocage/tags/gitjail/root/usr/home/repomaster/sentinel

gitjail_config_add_salt_repos:
  file.append:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster/gitolite2.conf
    - text:
      - repo salt_states
      -     R       =  saltmaster
      - repo salt_files
      -     R       =  saltmaster
      - repo salt_pillar
      -     R       =  saltmaster

gitjail_configure_git_repo:
  file.append:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster/gitolite-admin/.git/config
    - text: |
        [user]
                name = repomaster
                email = repomaster@gitjail

gitjail_create_addcommitpush_script:
  file.managed:
    - name: /iocage/tags/gitjail/root/usr/home/repomaster/gitolite-admin/addcommitpush.sh
    - contents:
      - git add keydir/*
      - git commit -m "Add user keys for gitolite"
      - git push
      - cp ../gitolite1.conf conf/gitolite.conf
      - git add conf/gitolite.conf
      - git commit -m "Add basic repo configuration"
      - git push
      - cp ../gitolite2.conf conf/gitolite.conf
      - git add conf/gitolite.conf
      - git commit -m "Add salt repositories"
      - git push
      - rm -f ../sentinel

gitjail_commit_repo_cfg:
  cmd.run:
    - name: 'iocage exec gitjail "cd /usr/home/repomaster/gitolite-admin && sudo -u repomaster sh addcommitpush.sh"'
    - onlyif: test -e /iocage/tags/gitjail/root/usr/home/repomaster/sentinel
Next steps: Now I must finish the saltmaster configuration and ensure that it can actually access the states, files and pillar via gitfs. Then the production states, files and pillar pieces can be imported. As soon as all that is in place, the bootstrapping process is done and SaltStack can be used.
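For reference, the gitfs part of that remaining saltmaster configuration could look roughly like this. This is only a sketch based on the repo names, gitjail address and SSH port used in the states above; the exact option set is an untested assumption, not the final config:

```yaml
# Hypothetical sketch for /usr/local/etc/salt/master (untested).
# Assumes the salt_states / salt_files / salt_pillar repos and the
# gitjail SSH setup (user "git", port 220) created by the states above.
fileserver_backend:
  - git

gitfs_provider: pygit2
gitfs_remotes:
  - ssh://git@10.0.1.2:220/salt_states
  - ssh://git@10.0.1.2:220/salt_files
gitfs_pubkey: /root/.ssh/id_ed25519.pub
gitfs_privkey: /root/.ssh/id_ed25519

ext_pillar:
  - git:
    - master ssh://git@10.0.1.2:220/salt_pillar
git_pillar_provider: pygit2
git_pillar_pubkey: /root/.ssh/id_ed25519.pub
git_pillar_privkey: /root/.ssh/id_ed25519
```

The keys referenced are the ones the `saltmaster_gen_ssh_keys` state generates, which is why their public half was deployed to gitolite's keydir above.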
ASX
Posts: 988
Joined: Wed May 06, 2015 12:46 pm

Re: SaltStack states

Post by ASX »

kraileth wrote:Finally managed to set aside a few hours early in the morning to get back to working on salt! And I made some nice progress, too.
I'm glad for you.
1) I've removed configuring the "latest" repo since I found out that the new iocage won't be available as a binary package (it depends on the current python36 while FreeBSD's default Python version is an older one) and thus must be built from ports. So the bootstrapping process now puts the ports tree in place and builds iocage.
I would stay well away from software that depends on "latest and greatest" things; depending on that type of software has, in my experience, always been a nightmare, for me at least.
Next steps: Now I must finish the saltmaster configuration and ensure that it can actually access the states, files and pillar via gitfs. Then the production states, files and pillar pieces can be imported. As soon as all that is in place, the bootstrapping process is done and SaltStack can be used.
I continue to fail to see the usefulness of such software in any context with fewer than ten or so nearly identical setups ... but maybe it is just me.
kraileth
Posts: 312
Joined: Sun Sep 04, 2016 12:30 pm

Re: SaltStack states

Post by kraileth »

ASX wrote:I would stay well away from software that depends on "latest and greatest" things; depending on that type of software has, in my experience, always been a nightmare, for me at least.
In general I tend to agree. But it depends a bit on what the targeted goal is.

1) If I were aiming at a production environment where every minute of downtime could be ruinous, I would definitely try to make a safe bet. Here on the other hand we have an open source project that is non-commercial in nature and driven by interest in technology and the idea of learning something by committing to it. We haven't had proper backups for how long now? If one of the services that I would like to use dies and I have trouble getting it back online, fighting for two days until it runs again, that's stressful and certainly not ideal, but it will neither kill the project (GhostBSD did fine without it before, after all!) nor will it ruin my life.

2) Working with a project that has a much broader base than GhostBSD would make it far more difficult to experiment with such things. Just think how much of a fight it was for the BSDs to abandon CVS even though there were mature and stable alternatives that are technically superior! We have the big benefit of not having to replace something that's already working, which would put a lot of pressure on the replacement to not only fit the needs but to do so quickly. Instead we have the freedom to tinker around with things, gain experience and maybe even start over in a slightly different way thanks to the knowledge gained. Once we reach a state where we declare it "production ready", it will be much harder to make changes. For that reason I chose to start with the latest version currently available - and plan to stick with that for a while. Starting with something that is obsolete by the time we have figured out how to use it properly is a much worse approach IMO. And going for it now, while the project is small enough to not actually need it, is surely much easier than it would be once it is a lot bigger. We can start slowly and relax while getting used to it over time. And that's pretty nice!
I continue to fail to see the usefulness of such software in any context with fewer than ten or so nearly identical setups ... but maybe it is just me.
It's not just you. If I hadn't worked with configuration management before, I don't think anybody who knows it well would suggest using it for a project of our size. It's simply a massive topic and there's a lot to learn if you want to do it right. Taking the time to get into it all by yourself would be a waste of time. The biggest benefit starts to show when the project scales up way beyond what we can possibly hope for in the next years. All that is true.

But... I have some experience with this and do not have to do all the reading anymore. I'm also not suggesting an "all or nothing" approach. We can keep things fairly basic, which makes the whole undertaking much more feasible. And then there are benefits to it: It's easily repeatable in case of a server change. It's a "do it once" kind of thing that protects us from human error as soon as we have a working configuration. It allows for consistent, versioned system configuration that is nicely documented by commit messages - which is a HUGE plus! Even with just two servers there are services, backup for example, which start out configured the same but which, with manual ad-hoc administration, WILL diverge over time. This is unnecessary if done right. And by using a central repo it's faster to make changes even to multiple nodes than it would be to change a single one by hand. Configuration management scales to any team size. And it's completely modular, so we don't need multiple servers with nearly identical roles; they can have entirely different purposes. Still, there are a lot of things that they have in common, users being just one example. Creating users by hand on multiple systems is tedious and error-prone. It can even lead to side-effects: What if you created all the users correctly but in a different order? Then you copy files over and they belong to the wrong user because the UIDs differ among the systems. Things like that happen all the time and they totally shouldn't. Configuration management can be one answer to problems like that.
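To give one concrete example of the UID point: a user state pins the numeric IDs, so the account comes out identical on every minion it is applied to, no matter in which order accounts were created. User name and IDs below are invented for illustration:

```yaml
# Hypothetical users.sls sketch: uid/gid are pinned so file ownership
# stays consistent across all managed systems.
group_jdoe:
  group.present:
    - name: jdoe
    - gid: 1234

user_jdoe:
  user.present:
    - name: jdoe
    - uid: 1234
    - gid: 1234
    - home: /home/jdoe
    - shell: /bin/sh
    - require:
      - group: group_jdoe
```

Copy a file between two minions running this state and it belongs to the same account on both, because the UID is guaranteed to match.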

It's fine that you are sceptical. Theory is one thing. I admit that despite accepting some of the benefits I thought that I could well cope without making use of all of this. After working as an admin for some time now, I know that I was totally wrong. Today I manage even my desktop machine (!) using configuration management. Yes, it makes enough sense for me, even for a single machine with unique configuration! I hope that I'll have a working system ready in the near future to actually show off how it works and I'm pretty sure that "hands on" experience will be more convincing than me writing another book. :P
ericbsd
Developer
Posts: 2125
Joined: Mon Nov 19, 2012 7:54 pm

Re: SaltStack states

Post by ericbsd »

I do not know if I would like that on the web or builder server. Just to confirm: backing up those jails is what I need. The whole website is in a jail and, if I am not mistaken, the builder server is set up to build packages in a jail. In case of a hardware failure those jails can be moved to any server at any time, and if we have to reinstall FreeBSD, it should take less than 40 minutes to set the website back up.

I am starting to appreciate jails very much, to the point that I am thinking of a lot of things I can do with them. It is even possible to run X in a jail.
kraileth
Posts: 312
Joined: Sun Sep 04, 2016 12:30 pm

Re: SaltStack states

Post by kraileth »

ericbsd wrote:I do not know if I would like that on the web or builder server. Just to confirm: backing up those jails is what I need. The whole website is in a jail and, if I am not mistaken, the builder server is set up to build packages in a jail. In case of a hardware failure those jails can be moved to any server at any time, and if we have to reinstall FreeBSD, it should take less than 40 minutes to set the website back up.
I intended to have the salt minion daemon running on everything. But how about this: We can start by adding new jails for new tasks by hand, install Saltstack in there and let it build whatever that jail is meant for (backup master, mailserver, ...). If you really dislike it in the end, scrap the jail(s) and there will be no trace left of it. I designed the whole thing to happen inside jails. Even the saltmaster has its own jail. So that should be absolutely no problem.
I am starting to appreciate jails very much, to the point that I am thinking of a lot of things I can do with them. It is even possible to run X in a jail.
Yes, jails are great and have been a very attractive feature of FreeBSD for quite a while. Given more time I'd also have tried out something like the "jail your Firefox!" tutorials out there. Great stuff!
ASX
Posts: 988
Joined: Wed May 06, 2015 12:46 pm

Re: SaltStack states

Post by ASX »

About the builder machine:
- synth does NOT run in a jail; marino@ advises against using jails (for performance reasons). synth builds a clean environment using chroot. The buildlogs servers, however, run in a jail.

So far we have made two migrations of the buildserver. The first was from our webserver to the machine provided by your company;
in that case, among other things, we migrated from a ZFS setup to a UFS-based one, and I doubt saltstack could have managed that as required.

The second was from the machine provided by your company to a new OVH server, and in that case we switched from a 3-disk setup to a 2-disk setup; again I doubt it would have been managed automatically.
I intended to have the salt minion daemon running on everything
No: first you explain the benefits, if any, and then we will see. We don't need additional complications; we are already a small team, busy enough without the need for additional complexity.
kraileth
Posts: 312
Joined: Sun Sep 04, 2016 12:30 pm

Re: SaltStack states

Post by kraileth »

ASX wrote:So far we have made two migrations of the buildserver. The first was from our webserver to the machine provided by your company;
in that case, among other things, we migrated from a ZFS setup to a UFS-based one, and I doubt saltstack could have managed that as required.

The second was from the machine provided by your company to a new OVH server, and in that case we switched from a 3-disk setup to a 2-disk setup; again I doubt it would have been managed automatically.
SaltStack does configuration management. It enforces states; as such it does not care about things like disk layout. The only thing that I imagine would change with regard to Synth is some paths in its configuration file, to put various things in different locations. That's changing one config file, making a commit and pushing. Unless I'm forgetting something, nothing else needs to change.
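To make the "one config file, commit, push" point concrete, such a change could boil down to a single state like the following, where the file name, template path and pillar key are invented for the example:

```yaml
# Hypothetical sketch: the Synth profile is rendered from a template
# kept in the salt_files repo; moving data to other disks means
# changing one pillar value and pushing the commit.
synth_profile:
  file.managed:
    - name: /usr/local/etc/synth/synth.ini
    - source: salt://synth/synth.ini.jinja
    - template: jinja
    - context:
        build_base: {{ salt['pillar.get']('synth:build_base', '/usr/obj/synth') }}
```

On the next highstate run every affected machine picks up the new path by itself.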
I intended to have the salt minion daemon running on everything
No: first you explain the benefits, if any, and then we will see. We don't need additional complications; we are already a small team, busy enough without the need for additional complexity.
Mind the tense! I said that I intended to do it this way. I did not see any major resistance coming since server management was more or less a neglected area in GhostBSD and I would not have dreamed that doing things right could be opposed. What I deemed unlikely has happened though, and that's fine. That's why I currently plan to introduce it in a much, much smaller scale, beginning with just one or two service jails that are managed. Should I really fail to convince you with that, I will cancel it, clean up and withdraw from infrastructure stuff.
ASX
Posts: 988
Joined: Wed May 06, 2015 12:46 pm

Re: SaltStack states

Post by ASX »

kraileth wrote:Mind the tense! I said that I intended to do it this way. I did not see any major resistance coming since server management was more or less a neglected area in GhostBSD and I would not have dreamed that doing things right could be opposed.
No tense intended; my English is far less descriptive than I would like, and as a result my sentences may sound like that.

But the meaning is correct: so far I fail to see any real benefit from adopting saltstack in this context.
Setting up a system is, for us in GhostBSD, mostly a one-time job, and when it does need to be repeated it is very likely that there will be differences, one way or the other.

Maintaining the saltstack setup looks to me, in this case, like unneeded additional work. I doubt you will be able to change my mind. ;)

(Perhaps I'm already 'imprinted'; in fact I built my IT career on customization and flexibility: custom systems, custom software, ad hoc services, fast responses and so on.)

I understand you work in a datacenter, and I guess that there the use of saltstack could be much more productive; but we are a tiny project with a minimal infrastructure, quite a different environment.