Armored Node for Sensitive Data

Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

This page documents how to secure a Grid'5000 node, making it suitable to host and process sensitive data. The process is based on a tool (g5k-armor-node.py) that runs on top of the debian11-x64-big Grid'5000 environment.

Important limitations of this solution

  • The solution does not protect against user errors during the setup of the secure environment. Please ensure that you follow this documentation with extreme care. Failing to do so could result in an insecure environment.
  • The solution does not protect against user errors that could result in transferring sensitive data outside the secure environment (the Internet is reachable from the secure environment). Please ensure that you use this environment with care.
  • The solution does not protect the rest of Grid'5000 against your node. Before using this solution to work on software that might attack other Grid'5000 machines (for example malware), please consult with the Grid'5000 technical staff.

Informing the technical team

Before starting to use Grid'5000 to process sensitive data, inform the technical team that you are going to do so. Email support-staff@lists.grid5000.fr with the following information:

  • your name
  • your affiliation
  • the general description of your planned work and the kind of data that you are going to process (do not include sensitive information here)
  • the description of the resources that you are going to reserve
  • the expected duration of your work

Node reservation, deployment, and securing

Identify your requirements

  • Select a cluster that suits your needs (for example using the Hardware page).
  • Estimate how long you will need the resources. If they exceed what is allowed for the default queue in the Usage Policy, the production queue may match your needs. If the duration also exceeds what is allowed by the production queue (more than one week), you can follow the procedure explained on the Usage Policy page to request an exception.
  • Take into consideration that all data (including data you produced) stored locally on the machine will be destroyed at the end of the reservation.
  • Reserve a node and a VLAN, then deploy the node with the debian11-x64-big environment inside the VLAN (see detailed steps below).

Reserve and set up your node (option 1: manually)

Make a reservation

Reserve the node and the VLAN. Example for a reservation in the production queue for one node of cluster CLUSTER starting at START DATE for a duration of WALLTIME:

nancy frontend:oarsub -q production -t deploy -t destructive -l {"type='kavlan'"}/vlan=1+{CLUSTER}/nodes=1,walltime=WALLTIME -r START DATE
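For instance, a reservation of one node of the grappe cluster for 14 hours starting on a given evening could look like this (the cluster name, walltime, and start date are placeholder values chosen for illustration; adapt them to your needs):

nancy frontend:oarsub -q production -t deploy -t destructive -l {"type='kavlan'"}/vlan=1+{grappe}/nodes=1,walltime=14:00:00 -r '2024-04-22 19:00:00'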

Note that additional disks available on the node (that may need an extra reservation) will be used as additional secured storage space, but data will always be destroyed at the end of the node reservation.

Once the job has started, connect inside the job:

frontend:oarsub -C JOB ID

Note that since it is a deploy job, the job shell opens on the frontend again.

Take note of the hostname of the reserved node, for instance with oarprint:

frontend:oarprint host

Take note of the assigned VLAN number:

frontend:kavlan -V
Deploy the debian11-x64-big environment

Deploy the node with the debian11-x64-big environment, inside the VLAN:

frontend:kadeploy3 -e debian11-x64-big --vlan `kavlan -V`

Now wait for the deployment to complete.

Securing the node with g5k-armor-node.py

Connect to the node from outside Grid'5000, using the node's hostname in the VLAN (the hostname with the Kavlan suffix of the reserved VLAN, since the node was deployed inside that VLAN). After the node is secured, this will be the only allowed way to connect to it, as SSH will only be authorized from Grid'5000 access machines:

your machine:ssh -J YOUR_G5K_LOGIN@access.grid5000.fr root@node-X-kavlan-Y.site.grid5000.fr

On the node, download g5k-armor-node.py, for example with:

node:wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/master/g5k-armor-node.py

Run it:

node:chmod a+rx g5k-armor-node.py
node:./g5k-armor-node.py

Wait for the script to finish (it must have displayed the Setup completed successfully! message).

Disconnect from the node, and try to connect again using SSH. You should get an error message from SSH, because the node's host key changed. This is expected: the script replaced the node's SSH host key with a newly generated one. Follow the instructions from SSH to remove the old key.
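In practice, SSH's error message indicates how to remove the old key; on most systems this amounts to running the following on your workstation (a sketch; use the node name from your reservation):

your machine:ssh-keygen -R node-X-kavlan-Y.site.grid5000.fr

Then connect again and accept the new host key.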

Reserve and set up your node (option 2: automated with grd)

grd is a command-line tool to automate Grid'5000 workflows. It supports performing the steps above. For example, to reserve and configure a node from the grappe cluster in nancy for 2 hours, use the following command (from a frontend, or locally after installing ruby-cute, which provides grd):

frontend:grd bs -s nancy -q production -l {grappe}/nodes=1+{"type='kavlan'"}/vlan=1 -w 2 --armor

As described above, you might get an error message from SSH, because the node's host key changed. This is expected: the script replaced the node's SSH host key with a newly generated one. Follow the instructions from SSH to remove the old key.

Using the secured node

Connect to the secured node

You must connect to the node using your Grid'5000 login directly from your workstation:

your machine:ssh -J YOUR_G5K_LOGIN@access.grid5000.fr YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr

The node can access the Internet and you can use the sudo command on the node to install additional software if needed.

Please remember that:

  • Only your home directory on the secured node is encrypted (/home/<username>). You must not store sensitive data outside of it (or on other Grid'5000 machines).
  • You must only use secured protocols to transfer data to/from the node as described below.
  • If you reboot the node or if the node is shut down for some reason, you will no longer be able to access your data. However, if you made a copy of the encryption key when it was displayed at the end of the script's output, you can restore the encrypted storage from the node with:
echo '<paste key content here>' > /run/user/1000/key
sudo cryptsetup luksOpen --key-file /run/user/1000/key /dev/mapper/vg-data encrypted
sudo mount /dev/mapper/encrypted $HOME
exit

Then reconnect to the node.
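To check that the encrypted volume is mounted again on your home directory, a quick sanity check such as the following can help (a sketch, based on the device names used in the restore commands above):

node:findmnt $HOME
node:lsblk -f

If the restore worked, findmnt should report /dev/mapper/encrypted as the source of your home directory.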

If you prefer to avoid keeping a copy of the encryption key, it is a good idea to make intermediary backups of the processed data (outside of Grid'5000), in case the secured node becomes unreachable during the processing.

Transferring data to/from the node

You must transfer data directly between an external secure storage and your Grid'5000 node. You must not use other Grid'5000 storage spaces (such as NFS spaces) in the process.

It is recommended to use rsync. Using rsync, you can specify access.grid5000.fr as an SSH JumpHost using the -e option. Alternatively, you can customize your SSH configuration as described in the Getting Started tutorial.

  • To transfer files to the node:
rsync -e "ssh -J YOUR_G5K_LOGIN@access.grid5000.fr" <local path> YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:<remote path>
  • To fetch files from the node:
rsync -e "ssh -J YOUR_G5K_LOGIN@access.grid5000.fr" YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:<remote path> <local path>

Data management

Several solutions are possible to manage the sensitive data you need to use on the node.

Solution A: Storing Data Outside Grid'5000

You could store the data in a secure storage space outside Grid'5000, and copy it to/from the node, as described above, when needed.

Main limitation of this solution: it is not suitable if the data volume is large (because of the transfer time).

Solution B: Storing Data In An Encrypted Archive Inside Grid'5000

Assuming you have previously provisioned an Armored Node as outlined in the guide above, and have transferred your sensitive data inside an AES-128 encrypted archive as described in the data transfer section, please follow these steps:

  • On the Armored Node, install 7z by running the following command:
node:sudo apt install 7zip
  • Once in the encrypted home directory on the Armored Node, decompress your encrypted archive using your predefined password with the following command:
node:7zz x sensitive_data.7z
  • Before storing derived sensitive data within Grid'5000 but outside the secured node, be sure to compress and encrypt it with your password, as follows:
node:7zz a -p -mhe=on -mx=9 -m0=lzma2 -mtc=on -mtm=on -mta=on sensitive_data_derived.7z sensitive_data/

With the .7z format, files are encrypted with AES-256 by default. Please take note of the crucial encryption options used in the command above, and distinguish them from the other, compression-related options:

  • Encryption-specific options (highly important):

-p: Prompts for a password when creating the archive; the same password is then required for extraction. This is what protects the encrypted data.

-mhe=on: Also encrypts the archive header, so that file names inside the archive cannot be listed without the password. This improves data privacy.

  • Other options, related to compression:

-mx=9: Compress at the highest level (9). This reduces the size of the encrypted data.

-m0=lzma2: Compress using the LZMA2 method, a lossless data compression algorithm. This optimizes disk space usage.

-mtc=on -mtm=on -mta=on: Store the files' creation, modification, and access timestamps in the archive, so that they are preserved on extraction.

For more details on the 7zip file archiver, you can refer to the man page on Debian (https://manpages.debian.org/bullseye-backports/7zip/7zz.1.en.html) and this compression manual (https://7zip.bugaco.com/7zip/MANUAL/cmdline/switches/method.htm).
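Before deleting any unencrypted working copy, it can be prudent to test the archive you just created; a minimal sketch, reusing the archive name from the command above:

node:7zz t sensitive_data_derived.7z

Since the header is encrypted (-mhe=on), 7zz should prompt for the password before verifying the archive's integrity.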


Main limitation of this solution: it is not very practical, because data must frequently be decrypted and decompressed, then compressed and encrypted again.

Solution C: Using a Remote Secured Volume (CompuVault)

Initial Setup

  1. User Requests Storage Space Creation: The user requests the creation of a CompuVault storage space from the Grid'5000 technical team, specifying the required volume.
  2. Technical Team Sets Up Storage: Following this guide, the technical team creates the storage space and sets up an iSCSI export protected by a login/password pair. The parameters (server address, export name, project name and iSCSI login/password) are communicated confidentially to the user in a compuVault_config.json file.
  3. User Configures Armored Node: Please refer to the guide above to provision one Armored Node.
  4. User Mounts and Encrypts Storage: The user mounts the storage space on the node via iSCSI and encrypts it with LUKS. The user retains the passphrase used for LUKS encryption.

Please note that the encryption passphrase is different from the iSCSI password and is known only to the user.

Warning: Since you will be dealing with decrypted sensitive data, please follow these steps carefully and ensure you have backups of the processed data, stored in a secure way, in case the secured node accidentally becomes unreachable.

  • Please make sure to transfer the compuVault_config.json file received from the technical team to your home directory on the Armored Node:

your machine:scp -J YOUR_G5K_LOGIN@access.grid5000.fr <local path>/compuVault_config.json YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:/home/YOUR_G5K_LOGIN

  • Connect to the Armored Node, as mentioned above.
  • On the Armored Node, download init-compuVault.py, for example with:
node:wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/compuVault/init-compuVault.py

Run it:

node:chmod a+rx init-compuVault.py
node:./init-compuVault.py

Wait for the script to finish executing; it should display the Init compuVault completed successfully! message. Afterward, you will see the mounted encrypted storage.
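To verify the result, a generic check such as the following can help (a sketch; the exact device and mount point names depend on the script's configuration):

node:lsblk -f
node:mount | grep /dev/mapper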

  • You can then transfer your sensitive data to the encrypted volume (following the data transfer guide above) and conduct your experiment; your data will be stored, encrypted, in the remote secured volume.

Subsequent Uses

  1. User Configures Armored Node: Please refer to the guide above to provision an Armored Node.
  2. User Mounts and Decrypts Storage: The user mounts the storage space on the node via iSCSI and decrypts it with the passphrase chosen during the initial LUKS encryption.
  • Please make sure to transfer the compuVault_config.json file received from the technical team to your home directory on the Armored Node:

your machine:scp -J YOUR_G5K_LOGIN@access.grid5000.fr <local path>/compuVault_config.json YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:/home/YOUR_G5K_LOGIN

  • Connect to the Armored Node, as mentioned above.
  • On the Armored Node, download mount-compuVault.py, for example with:
node:wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/compuVault/mount-compuVault.py

Run it:

node:chmod a+rx mount-compuVault.py
node:./mount-compuVault.py

Wait for the script to finish executing; it should display the Mount compuVault completed successfully! message. Afterward, you will see the mounted encrypted storage and can continue your work.


Disclaimer of this solution: Please provision only one Armored Node at a time for your remote secured storage; otherwise, you may encounter synchronization problems between multiple Armored Nodes. If you require multiple Armored Nodes simultaneously, please discuss your use case with the technical team.

Troubleshooting

Warning: If you experience any issue during the securing procedure, do not continue your experiment. The node might not be correctly secured, and thus your data might not be well protected.

Rerun the securing procedure from the beginning

You can try to rerun the whole procedure from the beginning, except that you do not need to execute the oarsub command again (if the job is still running).

Connect to the frontend you previously used and connect inside the job:

frontend:oarsub -C JOB ID

Format the node and deploy the debian11-x64-big environment on it:

frontend:kadeploy3 -e debian11-x64-big --vlan `kavlan -V`

Finally, download and execute the Python script as described in the "Securing the node with g5k-armor-node.py" section.

Get more output on each step for debugging

If you still experience an issue during the procedure, you might want to display more output to help debug and understand it.

To do so, do the following:

  • for the "deploying debian11-x64-big environment" step, add --verbose-level 5
frontend:kadeploy3 -e debian11-x64-big --vlan `kavlan -V` --verbose-level 5
  • for the "securing script" step, set the environment variable GAN_DEBUG to 1 :
node:GAN_DEBUG=1 ./g5k-armor-node.py
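To keep a copy of the debug output for the support email mentioned in the next section, you can also redirect it to a file; a sketch (the log file name is arbitrary):

node:GAN_DEBUG=1 ./g5k-armor-node.py 2>&1 | tee g5k-armor-debug.log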

Contact the technical team

If the script is not working properly or you have any question on the procedure, do not hesitate to contact the technical team BEFORE running any experiment with sensitive data: support-staff@lists.grid5000.fr. Please include all relevant details that could help the technical team to understand your problem (do NOT send any sensitive data by mail).

For instance, if the script g5k-armor-node.py is not working properly, please run it in debug mode (see the previous section) and copy/paste any error messages into the email you send to the technical team.

Extending node reservation beyond normal limits

A limitation of this solution is the frequent need for setting up the node and importing the required data.

A way to mitigate this problem is to extend the reservations beyond what is normally allowed by Grid'5000 policies (7 days maximum). However:

  • This adds constraints on maintenance operations for the Grid'5000 technical team
  • It is generally considered bad practice to reserve resources (which prevents other users from using them) and then not use them

If really needed, this possibility should be discussed with the user's security correspondent and with the Grid'5000 technical team. A prerequisite for this discussion is that the user clarifies the hardware that could match their needs, using for example the Hardware page.