I Solved My iPhone Storage Problem Once and For All
Another iPhone update, another "to install this update, you need to free up storage space."
As someone who pays for iCloud, I felt like I was going to lose it. I know iCloud only acts as a syncing engine, but realistically there's nothing stopping Apple from making it a durable backup service as well. This would make it a million times easier for users to manage their storage.
Rant over. Below is the story of how I decided to take my data needs into my own hands.
The Solution
I already have a DigitalOcean bucket that I use for my DB backups and for serving my Rocket League highlights, so why not build on that? There are other apps that can back up my photos, but none of them fit my exact needs of granular folder syncing and simple deletion, and, more importantly, none of them are free. Since I'm already paying $5/month for DO to host my web server, DB, and montages, I figured I'd use their new Cold Storage offering. It won't cost me anything extra, since I'm on the $5 flat fee and I don't have much data.
So the solution I crafted up is the following:
- A Go CLI that wakes up, walks the Photos directory, and queues media for upload to DO's cloud storage.
- The Go program runs uploads across a pool of 8 concurrent workers; it's worth noting, though, that the workload is bound by network I/O rather than CPU.
Quick Implementation Notes
The overall flow of my program looks like this:
- Walk the Photos directory, collecting all the paths to every photo
- Using the list of paths, place each path into a queue
- A pool of 8 workers pulls paths from the queue and attempts each upload, repeating until the queue is empty
- Successfully uploaded filepaths are written to another queue; those files are then moved to their own iPhone album, which I can select-all and delete to free up storage en masse
The Worker
The worker has a messages queue (channel), which it reads from, and a successes queue, which it sends to.
The worker pool struct's constructor spins up 8 goroutines that all pull from the messages channel. After a goroutine has successfully uploaded a file, it queues the file to be moved to the "staging" album, where I manually select the whole thing and delete it all at once.
The worker looks like this:
package pool

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"sync"

	"github.com/thornhall/backup/internal/client"
)

type Pool struct {
	wg        *sync.WaitGroup
	successes chan<- string
	client    *client.SpaceClient
}

type Message struct {
	Path   string
	Hidden bool
}

// New starts size workers that upload the paths received on messages and
// write successfully uploaded paths to successes.
func New(ctx context.Context, size int, messages <-chan Message, successes chan<- string, spaceClient *client.SpaceClient) *Pool {
	var wg sync.WaitGroup
	wg.Add(size)
	p := &Pool{
		wg:        &wg,
		successes: successes,
		client:    spaceClient,
	}
	for range size {
		go p.worker(ctx, messages, successes, &wg)
	}
	return p
}

// Close blocks until all the workers are finished, then closes the successes channel.
func (p *Pool) Close() {
	p.wg.Wait()
	fmt.Println("closing successes channel..")
	close(p.successes)
}

func (p *Pool) worker(ctx context.Context, messages <-chan Message, successes chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case msg, ok := <-messages:
			if !ok {
				log.Println("upload worker shutting down...")
				return
			}
			err := uploadWithRetries(ctx, p.client, msg)
			if err != nil {
				log.Printf("error uploading file %s: %v\n", filepath.Base(msg.Path), err)
			} else {
				log.Printf("%s uploaded successfully.\n", filepath.Base(msg.Path))
				successes <- msg.Path
			}
		case <-ctx.Done():
			return
		}
	}
}

func uploadWithRetries(ctx context.Context, client *client.SpaceClient, msg Message) error {
	const maxAttempts = 3

	file, err := os.Open(msg.Path)
	if err != nil {
		return fmt.Errorf("could not open file: %w", err)
	}
	defer file.Close()

	stat, err := file.Stat()
	if err != nil {
		return err
	}
	if stat.IsDir() {
		return fmt.Errorf("path is a directory")
	}

	log.Printf("Uploading file: %s...\n", filepath.Base(msg.Path))
	var uploadErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		// Rewind in case a previous attempt consumed part of the file.
		if _, err := file.Seek(0, 0); err != nil {
			return fmt.Errorf("could not rewind file: %w", err)
		}
		uploadErr = client.UploadFile(ctx, filepath.Base(msg.Path), file, msg.Hidden)
		if uploadErr == nil {
			return nil
		}
		log.Printf("attempt %d/%d failed for %s: %v\n", attempt, maxAttempts, filepath.Base(msg.Path), uploadErr)
	}
	return uploadErr
}

Conclusion
I strongly believe there is a valid reason to write custom programs like this, even if just for myself. Existing solutions lack granularity and, more importantly, they're not free. Using my own infrastructure, I can tailor a solution to my exact needs without paying more than I already do, and without over-engineering it to generalize for everyone, and that's really powerful.