Updated on 23 October 2020 - Better solution described in Persistent NFS mount points on macOS.
In a previous blog post, “Automount NFS on macOS”, I wrote about how to mount NFS shares. Unfortunately the solution was not reboot proof: on each reboot the OS resets the auto_master file to its default state.
As a workaround I created a script that adds the missing lines back to the auto_master file. The script needs to be run manually with sudo rights. The PAF (partner acceptance factor) of that solution is below zero.
After manually fixing it one time too often, I finally found a way to automate it. It is still a workaround for the reset of the auto_master file, but solving that underlying problem is simply out of my reach.
I found a way to run the script with sudo rights at boot time. As a bonus it’s a native macOS feature. Enter the world of launchd.
The partial Wikipedia launchd definition:
“launchd is an init and operating system service management daemon …”
Some tasks run as a local user (agents) and other tasks run as the root user (daemons). The tasks are defined in plist files, and the location of these plist files determines with which credentials they run.
An overview of the plist locations:

| Type | Location | Run on behalf of |
| --- | --- | --- |
| User Agents | ~/Library/LaunchAgents | Currently logged-in user |
| Global Agents | /Library/LaunchAgents | Currently logged-in user |
| Global Daemons | /Library/LaunchDaemons | root, or the user specified with the key `UserName` |
| System Agents | /System/Library/LaunchAgents | Currently logged-in user |
| System Daemons | /System/Library/LaunchDaemons | root, or the user specified with the key `UserName` |
In order to run a script with sudo rights we have to call it from a plist-file in the directory ‘/Library/LaunchDaemons’.
There are many plist configuration keys; in our case we only need three:
- Label, defines a unique identifier for the launchd instance
- Program, defines what to start
- RunAtLoad, defines when the job should be run
To run our “workaround” script at boot time:
- Label, who (the identifier): “org.tisgoud.restore_nfs_mount.plist”
- Program, what (the script path): “/Users/[your username]/Scripts/restore_nfs_mount.sh”
- RunAtLoad, when (the time to run): at load, i.e. boot time
The plist file is usually identified by its label. To run it as root, it is created in the directory ‘/Library/LaunchDaemons’:

```shell
$ sudo touch /Library/LaunchDaemons/org.tisgoud.restore_nfs_mount.plist
```
The plist file uses the XML format:

```shell
$ cat ./org.tisgoud.restore_nfs_mount.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.tisgoud.restore_nfs_mount.plist</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>/Users/[your username]/Scripts/restore_nfs_mount.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```
Replace “[your username]” in the path, or point it to wherever your scripts are located. The location above is my personal preference.
As a final step, set the file permissions:

```shell
$ sudo chmod 644 /Library/LaunchDaemons/org.tisgoud.restore_nfs_mount.plist
```
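launchd is strict about daemon plists: the file must be owned by root:wheel and must be valid XML, otherwise the job is silently skipped. Both can be checked up front (a sketch, assuming the plist path used above):

```shell
# LaunchDaemons must be owned by root:wheel, or launchd refuses to load them
$ sudo chown root:wheel /Library/LaunchDaemons/org.tisgoud.restore_nfs_mount.plist

# Validate the plist syntax; plutil prints "OK" when the XML is well-formed
$ plutil -lint /Library/LaunchDaemons/org.tisgoud.restore_nfs_mount.plist
```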
In the Automount NFS blog post I added the script as an update. Since then I renamed the script to “restore_nfs_mount.sh”.
The workaround script checks the file ‘/etc/auto_master’ for the string “auto_nfs”, the ‘include’ file with the NFS mount points. If the string “auto_nfs” is not found, it is appended to the end of the file. Not finding “auto_nfs” indicates a reset of the auto_master file, probably caused by a reboot.
Automount is then called to update the mount points defined in auto_master, now including the mount points defined in auto_nfs. From the automount man page:
automount reads the /etc/auto_master file, and any local or network maps it includes, and mounts autofs on the appropriate mount points to cause mounts to be triggered. It will also attempt to unmount any top-level autofs mounts that correspond to maps no longer found.
```shell
#!/bin/bash
# Restore the auto_nfs include after a reboot has reset /etc/auto_master
if ! grep -q 'auto_nfs' /etc/auto_master; then
    printf '/-\t\t\tauto_nfs\n' >> /etc/auto_master
    automount -cv
fi
```
```shell
$ chmod 744 /Users/tisgoud/Scripts/restore_nfs_mount.sh
```
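The check-and-append logic can be dry-run without touching the real /etc/auto_master by pointing it at a scratch copy (a sketch; the temp file is only for illustration):

```shell
#!/bin/bash
# Simulate a freshly reset master map in a scratch file
tmp=$(mktemp)
printf '+auto_master\n' > "$tmp"

# Same check-and-append logic as restore_nfs_mount.sh, but on the copy
if ! grep -q 'auto_nfs' "$tmp"; then
    printf '/-\t\t\tauto_nfs\n' >> "$tmp"
fi

grep 'auto_nfs' "$tmp"   # the include line is back
rm "$tmp"
```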
After running the script, the “auto_nfs” line is added to the auto_master file:

```shell
$ cat /etc/auto_master
#
# Automounter master map
#
+auto_master		# Use directory service
#/net			-hosts		-nobrowse,hidefromfinder,nosuid
/home			auto_home	-nobrowse,hidefromfinder
/Network/Servers	-fstab
/-			-static
/-			auto_nfs
```
The NFS shares on my Synology and their mount points are defined in ‘/etc/auto_nfs’:

```shell
$ cat /etc/auto_nfs
# Shared family eBook library
/System/Volumes/Data/Synology/calibre	-fstype=nfs,nolockd,resvport,hard,bg,intr,rw,tcp,nfc,rsize=65536,wsize=65536	nfs://192.168.200.200:/volume1/calibre
# Access to the Docker volumes
/System/Volumes/Data/Synology/docker	-fstype=nfs,nolockd,resvport,hard,bg,intr,rw,tcp,nfc,rsize=65536,wsize=65536	nfs://192.168.200.200:/volume1/docker
# Webserver on my Synology
/System/Volumes/Data/Synology/web	-fstype=nfs,nolockd,resvport,hard,bg,intr,rw,tcp,nfc,rsize=65536,wsize=65536	nfs://192.168.200.200:/volume1/web
```
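After automount has run, you can verify that the shares actually trigger, for example by listing the active NFS mounts or by accessing one of the mount points (the paths below are the ones from my auto_nfs):

```shell
# List all currently mounted NFS file systems
$ mount -t nfs

# Accessing a mount point triggers the automount
$ ls /System/Volumes/Data/Synology/calibre
```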
After writing the “Automount NFS” post I was asked whether this also works for SMB shares.
In the same way NFS mount points are included in the auto_master file, SMB shares can be mounted as well.
```shell
$ cat /etc/auto_master
#
# Automounter master map
#
+auto_master		# Use directory service
#/net			-hosts		-nobrowse,hidefromfinder,nosuid
/home			auto_home	-nobrowse,hidefromfinder
/Network/Servers	-fstab
/-			-static
/-			auto_nfs
/-			auto_smb
```
The syntax for the SMB shares is different. A big difference is that you have to add your credentials. These credentials are stored in plain text. Depending on the situation, the risk could be acceptable but in most cases it is not.
I tested it with the following ‘/etc/auto_smb’ file:

```shell
$ cat /etc/auto_smb
/System/Volumes/Data/Synology/calibre	-fstype=smbfs	://admin:NotMyRealP%40ssword%21@192.168.200.200/calibre
# Use the Hex ASCII value %40 for an `@`
# Use the Hex ASCII value %21 for an `!`
```
In the password all special characters need to be replaced by their Hex ASCII values.
“NotMyRealP@ssword!” => “NotMyRealP%40ssword%21”
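Instead of looking up the hex values by hand, you can let Python do the percent-encoding (a one-liner sketch; any URL-encoding tool works just as well):

```shell
python3 -c "import urllib.parse; print(urllib.parse.quote('NotMyRealP@ssword!', safe=''))"
# → NotMyRealP%40ssword%21
```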
In the current version of the workaround script the “SMB” part is not included but can easily be added.
When all files are in place you can verify the whole chain by running the launchd task manually:
```shell
$ sudo launchctl load /Library/LaunchDaemons/org.tisgoud.restore_nfs_mount.plist
```
No response means no errors, usually a good sign 😉.
With the following command you can check if the task is picked up by launchd:
```shell
$ sudo launchctl list | grep 'org.tisgoud.restore_nfs_mount'
-	0	org.tisgoud.restore_nfs_mount.plist
```
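On recent macOS versions `launchctl load` is deprecated in favour of the newer subcommands; the equivalent modern invocations would be (a sketch, using the same plist path and label as above):

```shell
# Load the daemon into the system domain (replaces "launchctl load")
$ sudo launchctl bootstrap system /Library/LaunchDaemons/org.tisgoud.restore_nfs_mount.plist

# Inspect the job's state, last exit status and program path
$ sudo launchctl print system/org.tisgoud.restore_nfs_mount.plist
```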
The dash means the job is not currently running (the script already finished), and the “0” is the exit status of its last run, so we know the launchd task ran successfully.
The final test would be a reboot to verify the restored mountpoints.
At first I was a bit reluctant to go for the launchd solution, but the PAF is 👍. The current solution uses native OS features and should be future proof.
Some food for thought and a possible improvement: a solution where launchd kicks off a script that does the mounting directly, instead of using automount and auto_master 🤔.