
SLF sensor data acquisition

The SLF sensor can perform snow density measurements (if the snow is dry) and liquid water content (LWC) measurements (if the snow density is known, for example from snow cutter measurements). The data is stored in the internal memory of the device and is retrieved by connecting the device to a computer with a USB cable, which establishes a serial connection:

  • turn the SLF sensor on, go to "Configuration" -> "TX over USB"
  • on Linux, run (as sudo/root): screen -L -Logfile ./slfsensor_{ISO date}.csv /dev/ttyACM0 115200, where {ISO date} is replaced by the ISO date of the measurements
  • on the SLF sensor, press "Start" in order to transmit the data
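
Alternatively, the capture can be scripted instead of using screen. Below is a minimal sketch with pyserial (pip install pyserial), assuming the same device path and baud rate as above; the logfile name is only an example:

    # Minimal alternative to screen: write the serial stream to a logfile.
    # Assumes the same device path and baud rate as above; stop is automatic
    # once the sensor has been silent for 60 s.
    import serial  # pip install pyserial

    with serial.Serial("/dev/ttyACM0", 115200, timeout=60) as port, \
            open("slfsensor_2023-04-04.csv", "wb") as logfile:  # example name
        while True:
            chunk = port.read(4096)  # returns what arrived within the timeout
            if not chunk:
                break  # no data for 60 s: assume the transmission has finished
            logfile.write(chunk)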

The whole content of the SLF sensor memory will be saved into the logfile. Please check that nothing is missing before deleting the data on the SLF sensor! If the logfile contains multiple days of field work, please split the file by days, for example as sketched below.
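
A minimal sketch for such a split, assuming each data line starts with an ISO date (YYYY-MM-DD); this format is an assumption, so adapt the parsing to the actual CSV layout of the device:

    # Split an SLF sensor logfile into one file per day.
    # Assumes each data line starts with an ISO date (YYYY-MM-DD);
    # adapt the parsing to the actual CSV layout of the device.
    from pathlib import Path

    logfile = Path("slfsensor_2023-04-04.csv")  # example file name
    per_day = {}
    for line in logfile.read_text().splitlines():
        day = line[:10]  # "YYYY-MM-DD" prefix (assumption)
        if len(day) == 10 and day[4] == "-" and day[7] == "-":
            per_day.setdefault(day, []).append(line)
    for day, lines in per_day.items():
        Path(f"slfsensor_{day}.csv").write_text("\n".join(lines) + "\n")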

Snow surface roughness

During the winter season 2022-2023, the FARO S120 TLS (Terrestrial Laser Scanner) operated at the Weissfluhjoch field site, recording one scan per hour from 04-04-2023 until 25-06-2023. Below is a guide for processing these scans from point clouds to Digital Surface Models (DSMs). The procedure was created and made operational by Loic Brouet. Additional work to modify it and/or apply it to other scenarios was done by Francesca Carletti and Benjamin Walter.

TLS scan processing guide

The TLS scan processing uses the CloudCompare software. Please follow the steps below in order to process the scans.

Software to install

  • Python
  • CloudCompare (download here). Install the latest stable version, and don’t forget to confirm the installation of the Python plugin when asked. Even though the software is in principle available for Linux (or Mac), the Python plugin only works on Windows, and the procedure described in the following has been developed in a Windows environment. More documentation for CloudCompare is available on its website
  • CloudCompare Python Plugin (description here, download here)

Procedure overview and preparation of the folder structure

Given the high number of scans and their size, processing them manually with the software would take too long; for this reason, a batch process is used with the CloudCompare command line. A set of Python scripts interacts with the CloudCompare command line; they must be used in the following order:

  1. get_orientation_matrix.py
  2. CC_transformation.py
  3. CC_segment.py
  4. CC_horizontal.py
  5. CC_rotation.py
  6. CC_CSF.py
  7. CC_bin2txt.py

Each Python script produces a .txt file containing the CloudCompare command line commands, one per scan. These .txt files then need to be executed in the terminal, in administrator mode.

Taking CC_transformation.py as an example, the procedure would be the following:

  • Run the Python code CC_transformation.py with Spyder, PyCharm or any other IDE in use;
  • Open CloudCompare to check the results at each step (this step can be skipped once it is certain that the procedure doesn’t produce wrong outputs);
  • Open a terminal in administrator mode (right click on Command Prompt > Run as an administrator) and move to the directory where CloudCompare has been installed: cd C:\Program Files\CloudCompare
  • Execute the .txt file that has been created with the Python script: C:\Users\brouetlo\Desktop\test_scanner\data\CC_transformation.txt

Repeat the above procedure for each step.
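
For illustration, each generated .txt file contains one CloudCompare command line per scan. A hypothetical sketch of such a generator follows; the paths, file patterns and flags are assumptions, and the real scripts may, for example, derive a per-scan transformation instead of applying a single reference matrix:

    # Hypothetical sketch of a command-file generator in the spirit of
    # CC_transformation.py. Paths, file patterns and flags are assumptions;
    # adapt them to the actual setup.
    from pathlib import Path

    scan_dir = Path(r"C:\data\scans")          # hypothetical input folder
    mat_ref = Path(r"C:\data\mat_ref.txt")     # reference 4x4 matrix (see Part 1)
    out_file = Path(r"C:\data\CC_transformation.txt")

    commands = [
        # One CloudCompare call per scan: open, transform, save (standard CLI flags).
        f'CloudCompare -SILENT -O "{scan}" -APPLY_TRANS "{mat_ref}" -SAVE_CLOUDS'
        for scan in sorted(scan_dir.glob("*.bin"))
    ]
    out_file.write_text("\n".join(commands) + "\n")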

Before starting the procedure, it is wise to manually create the folders where each script will read/write the files produced with CloudCompare. The suggested names are the following (the numbering helps to avoid getting lost in the order of the procedure):

  1. 01_original_matrices
  2. 02_transformation_files
  3. 03_orientation_corr
  4. 04_segmented
  5. 05_horizontal
  6. 06_rotation
  7. 07_csf, 07_noise
  8. 08_csf_txt
  9. 09_interpolated
  10. 10_detrended

The process thus contains a few steps, some of them still manual (creating folders, segmentation, data management); however, this was the easiest way at the time of writing (September 2023). The new files are saved in new folders so that the original data is not lost in case of processing problems, but since the data is quite heavy, it should be deleted or moved step by step to save storage. When processing a large number of scans at the same time, it is better to delete the outputs of the previous steps once each step is concluded, and to only keep the outputs from the last step before proceeding with interpolation and detrending.

It is recommended to install a disk of at least 1 TB for the processing, especially when analyzing several months/years of scans. For example, for the year 2023, almost 1800 scans were recorded. The output of the last CloudCompare step (CC_bin2txt.py, 08_csf_txt) is a .txt file of 250 MB per scan, which means that the whole set of scans, processed to the last step, will occupy about 450 GB (with interpolation and detrending still to be done).

Processing procedure

The processing procedure can be divided into three parts:

  1. Reading the FLS files and setting the same orientation and height for all the scans (get_orientation_matrix.py, CC_transformation.py);
  2. Defining and cleaning the area of interest in the scans (CC_segment.py, CC_horizontal.py, CC_rotation.py);
  3. Obtaining a raster from the point cloud (CC_CSF.py, CC_bin2txt.py, interpolation.py, detrended.py).

For each step, remember to set the correct directories for each folder in each script (this needs to be done manually).

Part 1: read and prepare the FLS files

Extracting the orientation matrix: extract the TLS orientation matrix by reading the FLS file (FARO Laser Scanner format). For that, open and run the code get_orientation_matrix.py with the Python plugin in CloudCompare; the orientation matrix is then saved in a .txt file for each scan in the folder 01_original_matrices.

Applying a transformation to the scans: derive and apply a transformation to the scans by using the command line of CloudCompare. For that, run the code CC_transformation.py and execute CC_transformation.txt in the command line. In this step, a reference matrix (mat_ref.txt) is applied so that each scan has the same origin.
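
As an illustration, CloudCompare reads such transformation files as a plain-text 4x4 matrix, one row per line; the identity matrix below is only a placeholder, not the actual content of mat_ref.txt:

    1.0 0.0 0.0 0.0
    0.0 1.0 0.0 0.0
    0.0 0.0 1.0 0.0
    0.0 0.0 0.0 1.0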

Part 2: Segmentation of the scans

  • Open one scan from 03_orientation_corr in CloudCompare
  • Select the cloud by clicking on it in the DB tree
  • Click on Segment (scissors icon in the toolbox), then select a rectangular or polygonal selection and define an area on the scan and cut the scan
  • Take the corner coordinates by clicking Display > Show cursor coordinates and note them down
  • Write them in CC_segment.py and run the code

Important notes concerning segmentation

  • The command is -CROP2D Z 4 -2.7 8.2 -2.7 6.3 1.6 6.3 1.6 8.2: “Z” refers to the axis that remains fixed, because the cropping happens on the XY plane (the standard for CloudCompare). The order of the coordinates, for four points P1, P2, P3, P4, is the following: X(P1) Y(P1) X(P2) Y(P2) and so on. The number of vertices is declared with the number 4 after “Z”; in this case the selection is a rectangle (a small helper for building this argument is sketched below)
  • There are different ways to perform the crop, depending on which axis remains fixed. For the 2023 scans, for example, after the transformation the scans are flipped onto the XZ plane, so the crop has been done by keeping the Y axis fixed. In this case, the command is: -CROP2D Y 4 2547.133301 -6.878368 2542.331299 -6.743929 2542.177246 7.550035 2547.074219 7.690681, where the coordinates are Z(P1) X(P1) etc.
  • Check that the segmentation works for some scans from different dates before running the batch process on a large number of scans
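
A hypothetical helper for turning the noted corner coordinates into the -CROP2D argument (the function name and layout are assumptions, not part of the original scripts):

    # Hypothetical helper: build the -CROP2D argument from corner coordinates.
    def crop2d_flag(axis, corners):
        # axis: the fixed axis ("Z" or "Y"); corners: (c1, c2) pairs in the
        # coordinate order expected by CloudCompare (see the notes above).
        coords = " ".join(f"{c1} {c2}" for c1, c2 in corners)
        return f"-CROP2D {axis} {len(corners)} {coords}"

    # Rectangle from the example above (Z fixed, four vertices):
    print(crop2d_flag("Z", [(-2.7, 8.2), (-2.7, 6.3), (1.6, 6.3), (1.6, 8.2)]))
    # prints: -CROP2D Z 4 -2.7 8.2 -2.7 6.3 1.6 6.3 1.6 8.2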

Making the surface horizontal (i.e. projecting it onto the XY plane):

  • This is necessary for the Cloth Simulation Filter (CSF, see Part 3) to perform well
  • Run CC_horizontal.py and execute CC_horizontal.txt in the command line

Rotating the scan:

  • The scan has to fit well within the CloudCompare frame (i.e. the yellow frame)
  • This is done to avoid trimming the scan after the interpolation
  • Click on the cloud in the DB tree, then Edit > Apply transformation > Axis/Angle > Rotation angle
  • Try several values on different scans until the scan fits in the yellow frame
  • Copy the transformation matrix in a .txt file called rotation_matrix.txt
  • Run CC_rotation.py and execute CC_rotation.txt in the command line
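
For reference, a rotation by an angle θ around the vertical axis corresponds to a 4x4 transformation matrix; below is a minimal sketch for writing rotation_matrix.txt with NumPy (the angle is only an example value, to be found by trial and error as described above):

    # Write a 4x4 rotation matrix (rotation around the Z axis) to rotation_matrix.txt.
    # The angle is an example; find the one that fits your scans by trial and error.
    import numpy as np

    theta = np.deg2rad(35.0)  # example rotation angle
    c, s = np.cos(theta), np.sin(theta)
    mat = np.array([
        [c,  -s,  0.0, 0.0],
        [s,   c,  0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    np.savetxt("rotation_matrix.txt", mat, fmt="%.6f")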

Part 3: Cloth Simulation Filtering

  • Filter out the points lying off the surface with the CSF (Cloth Simulation Filter)
  • The parameters that are already set are the ones that fit the snow surface best, so they don’t have to be changed
  • Run CC_CSF.py and execute CC_3_CSF.txt in the command line

From .bin to .txt:

  • In this way, the point clouds are readable in Python
  • Run CC_bin2txt.py and execute CC_bin2txt.txt in the command line

From point cloud to raster:

  • Interpolate the point cloud to obtain a grid (raster)
  • Run the code interpolation.py with Python
  • The parameter “cell_size” has to be adjusted as a function of the distance between the points, but for the 2023 scans it was left unchanged (0.004 m); the idea of this step is sketched below
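
A minimal sketch of the gridding idea (an illustration, not the actual interpolation.py), assuming the .txt files from the previous step contain whitespace-separated x y z columns:

    # Minimal sketch of interpolating a point cloud onto a regular grid.
    # Illustration only, not the actual interpolation.py; assumes the input
    # .txt file contains whitespace-separated x y z columns.
    import numpy as np
    from scipy.interpolate import griddata

    points = np.loadtxt("scan_example.txt")  # hypothetical file name
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    cell_size = 0.004  # grid resolution in metres (value used for the 2023 scans)
    xi = np.arange(x.min(), x.max(), cell_size)
    yi = np.arange(y.min(), y.max(), cell_size)
    XI, YI = np.meshgrid(xi, yi)

    # Linear interpolation of the scattered points onto the regular grid.
    ZI = griddata((x, y), z, (XI, YI), method="linear")
    np.save("scan_example_grid.npy", ZI)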

Detrending the snow surface model:

  • In this way, we obtain the roughness and the small-scale changes, as sketched below
  • Run the code detrended.py with Python
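
For illustration (again not the actual detrended.py), detrending can be understood as removing a best-fit plane from the DSM so that only the small-scale roughness remains; a minimal sketch assuming the gridded .npy output of the previous step:

    # Minimal sketch of detrending a DSM by subtracting a least-squares plane.
    # Illustration only, not the actual detrended.py; assumes the .npy grid
    # produced by the interpolation step.
    import numpy as np

    Z = np.load("scan_example_grid.npy")
    yy, xx = np.mgrid[0:Z.shape[0], 0:Z.shape[1]]
    mask = ~np.isnan(Z)  # griddata leaves NaNs outside the convex hull

    # Fit z = a*x + b*y + c by least squares on the valid cells.
    A = np.column_stack([xx[mask], yy[mask], np.ones(mask.sum())])
    coeffs, *_ = np.linalg.lstsq(A, Z[mask], rcond=None)

    # Subtract the fitted plane: what remains is the surface roughness.
    trend = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
    np.save("scan_example_detrended.npy", Z - trend)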

Optional fine tuning

  • outliers_filter.py: filters the noise/outliers in the detrended DSM
  • npy2csv.py: saves the grid in .tiff format (instead of .npy)
  • plots2video_sav.py: plots the scan pictures and saves a video