Doc: detailed changes for user doc
This is the first set of changes to improve the documentation, with content
differentiated for users.

Managing volumes, features, known issues and troubleshooting are yet to be worked on.

Signed-off-by: Hari Gowtham <[email protected]>
harigowtham committed Aug 17, 2018
1 parent 8d5b37d commit 79fb1e9
Showing 3 changed files with 283 additions and 0 deletions.
104 changes: 104 additions & 0 deletions doc/managing-trusted-storage-pool.md
@@ -0,0 +1,104 @@
# Managing Trusted Storage Pools


### Overview

A trusted storage pool (TSP) is a trusted network of storage servers (peers). More about TSPs can be found [here](https://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/).

The corresponding glusterd2 commands are described below.


- [Adding Servers](#adding-servers)
- [Listing Servers](#listing-servers)
- [Viewing Peer Status](#peer-status)
- [Removing Servers](#removing-servers)


<a name="adding-servers"></a>
### Adding Servers

To add a server to a TSP, run `peer add` from a server that is already in the pool.

# glustercli peer add <server>

For example, to add a new server (server2) to the pool, add it from one of the existing servers:

server1# glustercli peer add server2
Peer add successful
+--------------------------------------+---------+-----------------------+-----------------------+
| ID | NAME | CLIENT ADDRESSES | PEER ADDRESSES |
+--------------------------------------+---------+-----------------------+-----------------------+
| fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007 | server2:24008 |
| | | 192.168.122.193:24007 | 192.168.122.193:24008 |
+--------------------------------------+---------+-----------------------+-----------------------+


Verify the peer status from the first server (server1):

server1# glustercli peer status
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
| ID | NAME | CLIENT ADDRESSES | PEER ADDRESSES | ONLINE | PID |
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
| d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007 | 192.168.122.18:24008 | yes | 1269 |
| | | 192.168.122.18:24007 | | | |
| fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007 | 192.168.122.193:24008 | yes | 18657 |
| | | 192.168.122.193:24007 | | | |
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+


<a name="listing-servers"></a>
### Listing Servers

To list all nodes in the TSP:

server1# glustercli peer list
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
| ID | NAME | CLIENT ADDRESSES | PEER ADDRESSES | ONLINE | PID |
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
| d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007 | 192.168.122.18:24008 | yes | 1269 |
| | | 192.168.122.18:24007 | | | |
| fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007 | 192.168.122.193:24008 | yes | 18657 |
| | | 192.168.122.193:24007 | | | |
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+


<a name="peer-status"></a>
### Viewing Peer Status

To view the status of the peers in the TSP:

server1# glustercli peer status
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
| ID | NAME | CLIENT ADDRESSES | PEER ADDRESSES | ONLINE | PID |
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
| d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007 | 192.168.122.18:24008 | yes | 1269 |
| | | 192.168.122.18:24007 | | | |
| fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007 | 192.168.122.193:24008 | yes | 18657 |
| | | 192.168.122.193:24007 | | | |
+--------------------------------------+---------+-----------------------+-----------------------+--------+-------+


<a name="removing-servers"></a>
### Removing Servers

To remove a server from the TSP, run the following command from another server in the pool:

# glustercli peer remove <peer-ID>

For example, to remove server2 from the trusted storage pool:

server1# glustercli peer remove fd0aaa07-9e5f-4265-b778-e49514874ca2
Peer remove success

***Note:*** For now, `peer remove` works only with the peer ID, which you can get from `peer status`.
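
If you need the peer ID non-interactively, a rough sketch like the one below can pull it out of the `peer status` table shown above. This is only a convenience, not part of glustercli; the awk field positions are an assumption tied to that exact table layout and will break if the output format changes:

    # Extract the ID of the peer named "server2" from the status table shown above
    server1# glustercli peer status | awk -v peer=server2 '$4 == peer {print $2}'
    fd0aaa07-9e5f-4265-b778-e49514874ca2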

Verify the peer status:

server1# glustercli peer status
+--------------------------------------+---------+----------------------+----------------------+--------+------+
| ID | NAME | CLIENT ADDRESSES | PEER ADDRESSES | ONLINE | PID |
+--------------------------------------+---------+----------------------+----------------------+--------+------+
| d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007 | 192.168.122.18:24008 | yes | 1269 |
| | | 192.168.122.18:24007 | | | |
+--------------------------------------+---------+----------------------+----------------------+--------+------+

121 changes: 121 additions & 0 deletions doc/setting-up-volumes.md
@@ -0,0 +1,121 @@
# Setting up GlusterFS Volumes

The commands that differ under GD2 are described in this doc. For information about volume types and related concepts, refer [here](https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/).

## Creating New Volumes

### Creating Distributed Volumes

`# glustercli volume create --name <VOLNAME> <UUID1>:<brick1> .. <UUIDn>:<brickm> `

where n is the number of servers and m is the number of bricks; m can be equal to or greater than n.

For example, a four node distributed volume:

# glustercli volume create --name testvol server1:/export/brick1/data server2:/export/brick2/data server3:/export/brick3/data server4:/export/brick4/data
testvol Volume created successfully
Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99
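
Once created, the volume can be inspected before it is started; a minimal sketch, assuming the `volume info` subcommand is available in your glustercli build:

    # glustercli volume info testvol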

### Creating Replicated Volumes

`# glustercli volume create --name <VOLNAME> --replica <count> <UUID1>:<brick1> .. <UUIDn>:<brickm>`

where n is the number of servers and m is the number of bricks; for a plain replicated volume, m equals the replica count.

For example, to create a replicated volume with two storage servers:

# glustercli volume create testvol server1:/exp1 server2:/exp2 --replica 2
testvol Volume created successfully
Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

> **Note**:

> - GlusterD2 creates a replicate volume even if more than one brick of a replica set is present on the same peer. For example, a replica 4 volume where two of the bricks are on the same peer:
>

> # glustercli volume create --name <VOLNAME> --replica 4 server1:/brick1 server1:/brick2 server2:/brick2 server3:/brick3
> <VOLNAME> Volume created successfully
> Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

### Arbiter configuration for replica volumes

`# glustercli volume create <VOLNAME> --replica 2 --arbiter 1 <UUID1>:<brick1> <UUID2>:<brick2> <UUID3>:<brick3>`
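
For example, following the template above (the hostnames and brick paths here are only illustrative):

    # glustercli volume create testvol --replica 2 --arbiter 1 server1:/exp1 server2:/exp2 server3:/exp3-arbiter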

>**Note:**
>
> 1) The count is given as replica 2, not replica 3, even though the replica set has three bricks (arbiter included).
> 2) The arbiter configuration can be used to create distributed-replicate volumes as well.

### Creating Distributed Replicated Volumes

`# glustercli volume create --name <VOLNAME> <UUID1>:<brick1> .. <UUIDn>:<brickm> --replica <count> `

where n is the number of servers and m is the number of bricks; m must be a multiple of the replica count.

For example, a distributed replicated volume with four bricks across two servers
(two-way replication):

# glustercli volume create --name testvol server1:/export/brick1/data server2:/export/brick2/data server1:/export/brick3/data server2:/export/brick4/data --replica 2
testvol Volume created successfully
Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

For example, to create a distributed replicated volume with six bricks across six
servers (two-way replication):

# glustercli volume create testvol server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 --replica 2
testvol Volume created successfully
Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

> **Note**:

> - GlusterD2 creates a distributed replicate volume even if more than one brick of a replica set is present on the same peer. For example, a four-brick distributed replicated volume where both bricks of each replica set are on the same peer:
>

> # glustercli volume create --name <VOLNAME> --replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server2:/brick4
> <VOLNAME> Volume created successfully
> Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99


### Creating Dispersed Volumes

`# glustercli volume create --name <VOLNAME> --disperse <COUNT> <UUID1>:<brick1> .. <UUIDn>:<brickm>`

For example, a four node dispersed volume:

# glustercli volume create --name testvol --disperse 4 server{1..4}:/export/brick/data
testvol Volume created successfully
Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

For example, to create a six node dispersed volume:

# glustercli volume create testvol --disperse 6 server{1..6}:/export/brick/data
testvol Volume created successfully
Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

The redundancy count is automatically set to 2 here.
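
As a rough capacity calculation for the six-brick example above (assuming equal-sized bricks):

    usable space = (disperse count - redundancy) x brick size
                 = (6 - 2) x brick size
                 = 4 x brick size

With a redundancy of 2, the volume stays available as long as no more than two bricks are lost.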

### Creating Distributed Dispersed Volumes

`# glustercli volume create --name <VOLNAME> --disperse <COUNT> <UUID1>:<brick1> .. <UUIDn>:<brickm>`

For example, to create a distributed dispersed volume with six bricks (two disperse subvolumes of three bricks each):

# glustercli volume create testvol --disperse 3 server1:/export/brick/data{1..6}
testvol Volume created successfully
Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99
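
With `--disperse 3` and six bricks, the bricks are grouped into two disperse subvolumes of three bricks each, and files are distributed across those subvolumes. A sketch of the resulting layout, assuming bricks are grouped in the order they are listed (as in glusterfs):

    distribute
    ├── disperse subvolume 1: data1 data2 data3
    └── disperse subvolume 2: data4 data5 data6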


## Starting Volumes

You must start your volumes before you try to mount them.

**To start a volume**

- Start a volume:

`# glustercli volume start <VOLNAME>`

For example, to start testvol:

# glustercli volume start testvol
Volume testvol started successfully
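
Once the volume is started it can be mounted from a client with the standard GlusterFS native mount. This step is plain glusterfs rather than GD2-specific, and the mount point below is illustrative:

    client# mount -t glusterfs server1:/testvol /mnt/glusterfs
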
58 changes: 58 additions & 0 deletions doc/user_guide.md
@@ -0,0 +1,58 @@
# User Guide

## Glusterd2 and glusterfs

This section explains how glusterd2 (GD2) fits in with glusterfs.

### Glusterfs

GlusterFS is a scalable distributed network filesystem. More about Gluster can be found [here](https://docs.gluster.org/en/latest/).

***Note:*** An understanding of GlusterFS is necessary to use Glusterd2.

#### Glusterd

Glusterd is the management daemon for glusterfs. It serves as the Gluster elastic volume manager, overseeing glusterfs processes and coordinating dynamic volume operations, such as adding and removing volumes across multiple storage servers non-disruptively.

Glusterd runs on all of the servers. Commands are issued to glusterd through its CLI, which is part of glusterd and can be run on any server where glusterd is running.

#### Glusterd2

Glusterd2 is the next version of glusterd and is maintained as a separate project for now.
It works alongside the glusterfs binaries; more about this is covered in the Installation section.

Glusterd2 has its own CLI (glustercli), which is different from glusterd's CLI.

**Note:** There are other ways to communicate with glusterd2; these are explained in the architecture documentation as well as the [configuring GD2]() section.
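
For instance, GD2 exposes a ReST API that the CLI itself talks to. A minimal sketch of querying it directly with curl, assuming the default client port (24007) and a `/v1/peers` endpoint; check the GD2 ReST API documentation for the exact routes:

    curl -s http://server1:24007/v1/peers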

## Installation

Note: Glusterd and the gluster CLI (the first version) are installed with glusterfs. Glusterd2 has to be installed separately as of now.

## Configuring GD2

## Using GD2

### Basic Tasks

- [Starting and stopping GD2](managing-the-glusterd2-service.md)
- [Managing Trusted Storage Pools](managing-trusted-storage-pool.md)
- [Setting Up Storage](https://docs.gluster.org/en/latest/Administrator%20Guide/setting-up-storage/)
- [Setting Up Volumes](setting-up-volumes.md)
- [Setting Up Clients](https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/)
- [Managing GlusterFS Volumes](managing-volumes.md)

### Features

- [Geo-replication](geo-replication.md)
- [Snapshot](snapshot.md)
- [Bit-rot](bitrot.md)
- [Quota](quota.md)


## Known Issues

**IMPORTANT:** Do not use glusterd and glusterd2 together. Do not file bugs if you do so.

[Known issues](known-issues.md)

## Troubleshooting
