I have inherited a server which uses GlusterFS to store backups. I have an issue where it is running out of space; checking the volume status I can see that the disks/bricks are different sizes. The .172 host is fine for space, but .173 is about to run out. The info is below (also at http://pastebin.com/raw.php?i=iaUCfMQm). Looking into it, my only option seems to be to use parted and resize sdb1 on .173 (a rough sketch of what I think that would involve is at the bottom). Is there an easy way to get some space allocated to the .173 volume? These machines are on VMware, and if the disks had been created using LVM I could have just extended the .173 partition, but I am unable to do that.

-------------
192.168.4.173
-------------

Volume Name: backup
Type: Replicate
Volume ID: 0f414fb2-1ca3-47ca-93d2-f7d94371f5b5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.4.172:/export/sdb1/brick
Brick2: 192.168.4.173:/export/sdb1/brick

# df -h | grep sdb1
/dev/sdb1        200G  185G   16G  93%  /export/sdb1

-------------
192.168.4.172
-------------

Volume Name: backup
Type: Replicate
Volume ID: 0f414fb2-1ca3-47ca-93d2-f7d94371f5b5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.4.172:/export/sdb1/brick
Brick2: 192.168.4.173:/export/sdb1/brick
Options Reconfigured:
geo-replication.indexing: on

# df -h | grep sdb1
/dev/sdb1        400G  185G  216G  47%  /export/sdb1

-------------
192.168.4.100
-------------

# df -h | grep backup
192.168.4.172:/backup  200G  185G   16G  93%  /backups
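
For reference, this is roughly what I think the resize on .173 would involve if there is no easier way. It is only a sketch: it assumes the virtual disk behind sdb can first be grown (e.g. to 400G) in vSphere, that sdb1 is the last/only partition on that disk, that parted is a 3.x version with resizepart, and that the brick filesystem is XFS (use resize2fs instead if it turns out to be ext4). None of that is confirmed on this box yet.

# On 192.168.4.173, after growing the virtual disk in vSphere:
echo 1 > /sys/class/block/sdb/device/rescan   # make the kernel notice the larger disk
parted /dev/sdb print                         # confirm the extra space is visible
parted /dev/sdb resizepart 1 100%             # grow partition 1 to the end of the disk
xfs_growfs /export/sdb1                       # grow the mounted filesystem (XFS assumed)
df -h | grep sdb1                             # should now show ~400G like the .172 brick

Since it is a 1 x 2 replicate volume, the Gluster client will keep reporting the size of the smaller brick until both bricks match, which matches what the .100 client shows above.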