
by Thomas Kunath

Introduction

With the update to Silent Brick version 2.57, an optimized S3 implementation for S3 shares with object locking is available.

Using the new S3 implementation is optional but highly recommended, as it improves performance and disk space usage. To use the new implementation, a new share has to be set up.

This guide describes how to migrate existing S3 data to a new S3 share if needed.

    • In Silent Brick Version 2.57 the S3 server is updated to a new version. This version offers increased performance and a significantly reduced overhead for S3 shares with object locking.

    • Existing S3 shares with object locking can still be accessed and used. In order to utilise the performance and overhead advantages a new share has to be created. This guide describes how to migrate existing S3 data to such a new share.

    • Be aware of your object locks. When migrating data to a new share on the same volume, you may end up with duplicated data, since the original data cannot be deleted due to object locks!

    • These instructions also copy deleted data and data intended for deletion.

  1. First of all check the available space on the volume

    • If less than 60% is free, please contact FAST Support, since the data cannot be duplicated.

    • If 60% or more space is available, go to the next step in this manual

    • Keep in mind that existing data may be subject to an object locking retention period and may not be deletable for some time.

    • Alternatively create a new Volume on an empty Silent Brick as copy target.

    • Examples are always listed with the individual commands. Two shares are used for this

    • The two shares in the example are located on the controller with the IP address 172.20.44.167 and are called

    • old_data_object_lock - for the existing S3 share with object locking. Port 9000

    • new_data_object_lock - for the new share to which the data is to be transferred. Port 9001

    • Access data for both shares:

    • Access Key: abcd1234

    • Secret key: secretkey

  2. MinIO provides a client which will be used in this manual to copy the data

    • If you are already using an add-on for S3 in Windows Explorer, the data can be copied using this tool.

    • We recommend writing down the share settings, then renaming the share and altering its port, so that a new share can be set up with the original settings.

    • In this example, the existing S3 share old_data_object_lock is to be copied

    • Right-click on the old share to display the existing buckets using Manage Buckets

    • Write down the bucket names of the existing share

  3. Now create a new share on the target volume.

    • The share name cannot be changed later. Therefore, use a share name that can be easily customised at the source

    • Use the settings of the old share to make a later connection as simple as possible

  4. Now generate the same buckets on the new share as on the old existing share

    • Right-click on the new share.

    • Click on Manage Buckets

    • Use Create New Bucket

    • Now enter the bucket name and save the changes.

    • Do this for each bucket of the old share

  5. Download the MinIO client for your operating system from the MinIO download page

    • Install the client and follow the steps below (described here for Windows):

    • Now copy the mc.exe file to the path

    • C:\Users\<my_user>\mc\

    • This is necessary so that the mc config file located in this folder can be accessed.

    • To configure the client, we now need some data from the old S3 share. Click on the S3 share to open the share info

    • We now need the share name (old_data_object_lock) here, the IP address or machine name and the S3 port

    • Then we need the access key and the secret key

  6. This data must now be entered in the mc config file. This file is located in Windows under

    • C:\Users\<my_user>\mc\config.json

    • Switch to C:\Users\<my_user>\mc\

    • Run the following command to fill the mc config.json with the needed data

    • mc config host add <S3 share name> https://<IP_address:port> <access_key> <secret_key> --api S3v4 --insecure

    • The --insecure flag must be used because the Silent Brick systems use a self-signed SSL certificate by default. If you have installed your own SSL certificate on the system, --insecure does not need to be appended to the command.

    • In this example: mc config host add old_data_object_lock https://172.20.44.167:9000 abcd1234 secretkey --api S3v4 --insecure

    • If the command completes without errors, a confirmation appears
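    • After the command succeeds, the alias is stored in the mc config file. As a rough illustration only (the exact layout varies between mc client versions, and the top-level key may be hosts or aliases), the entry for the example share might look like this:

```json
{
  "version": "10",
  "hosts": {
    "old_data_object_lock": {
      "url": "https://172.20.44.167:9000",
      "accessKey": "abcd1234",
      "secretKey": "secretkey",
      "api": "S3v4"
    }
  }
}
```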

  7. We now have to do the same steps for the new share

    • Click on the S3 share to open the share info

    • We now need the share name (new_data_object_lock) here, the IP address or machine name and the S3 port

    • Then we need the access key and the secret key

  8. The information for the new S3 share must now also be entered in the mc config file.

    • Switch to C:\Users\<my_user>\mc\

    • Run the following command to fill the mc config.json with the needed data

    • mc config host add <S3 share name> https://<IP_address:port> <access_key> <secret_key> --api S3v4 --insecure

    • In this example: mc config host add new_data_object_lock https://172.20.44.167:9001 abcd1234 secretkey --api S3v4 --insecure

    • If the command completes without errors, a confirmation appears

  9. As the S3 share with object lock is basically a WORM medium, the deleted data and retention times should also be transferred here. This can be ensured by an option in the command.

    • --replicate "delete,delete-marker,existing-objects" takes over all deleted elements, elements intended for deletion, and all active elements. If the deleted elements are not required, it is sufficient to use only --replicate "existing-objects"

    • The copy command for a bucket is:

    • mc replicate add <source S3 share/Bucket/> --remote-bucket <target S3 share/bucket/> --replicate "delete,delete-marker,existing-objects" --insecure

    • Here in the example:

    • mc replicate add old_data_object_lock/bucket1/ --remote-bucket new_data_object_lock/bucket1/ --replicate "delete,delete-marker,existing-objects" --insecure

    • If the command can be executed without errors, you will receive a confirmation.

    • A replication has now been set up between the old and the new S3 share. This is active until it is switched off again. This means that new incoming data is also transferred from the old to the new share.

  10. In the new S3 share, you can now see that it is no longer empty because data has been transferred.

    • You can now use mc replicate status to check whether replication is running between the two nodes. During data transfer, you can see how much data is currently cached and how quickly the data is being transferred.

    • The command for this is: mc replicate status <source S3 share/bucket> --insecure

    • Here again in the example: mc replicate status old_data_object_lock/bucket1 --insecure

    • After data has been transferred, the queued status is green again.

    • As this is an ongoing replication, the status can change several times, since a queue is built up and processed repeatedly as long as data is available. The process is complete when the amount of replicated data no longer changes for a longer period of time.

  11. You can also query the data on the shares with mc ls

    • mc ls <S3 Share/Bucket> --insecure

    • Here, for example, a query of the new S3 share

    • mc ls new_data_object_lock/bucket1 --insecure

  12. If you are sure that all files have been transferred, you can move your data source, e.g. Veeam, to the new share. If you have selected the same access data as for the old share, only the share name and the port need to be changed.

    • The replication between the two shares can then be removed again.

    • This is done with the following command:

    • mc replicate remove --all --force <source S3 share/bucket> --insecure

    • Here in the example:

    • mc replicate remove --all --force old_data_object_lock/bucket1 --insecure

    • Now carry out the above steps for all your buckets. Keep an eye on the fill level of your volume!
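If you have many buckets, the per-bucket replication commands can be generated with a small shell loop. This is a minimal sketch, assuming the example aliases from this guide and hypothetical bucket names; it only prints the commands so they can be reviewed first:

```shell
#!/bin/sh
# Sketch: print the replication command for each bucket so the whole
# migration can be reviewed before running it. The aliases and the
# bucket names are the examples from this guide; replace them with
# your own, and drop the "echo" to actually execute the commands.
SRC=old_data_object_lock
DST=new_data_object_lock
for BUCKET in bucket1 bucket2 bucket3; do
  echo mc replicate add "$SRC/$BUCKET/" --remote-bucket "$DST/$BUCKET/" \
    --replicate "delete,delete-marker,existing-objects" --insecure
done
```

The same pattern can be used for the mc replicate status and mc replicate remove commands from steps 10 and 12.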


© 2019 FAST LTA | All rights reserved | Subject to change at any time without prior notice