While searching the web for a quick way to clone all repositories of a given GitHub organization with little effort, I came across this snippet in a Gist by the user caniszczyk, which reads as:

curl -s https://api.github.com/orgs/twitter/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]} ]}'

It worked well, although its sequential nature made it take a considerable amount of time to complete. This prompted me to think about how I could parallelize it, and since I favour Python, I rewrote it as a Python one-liner instead. My translation of the script above to curl/python/xargs can be found below.

The magic ingredient here is the xargs -P flag, which controls how many parallel processes are spawned; adjust the number according to your preference and bandwidth.

curl -s https://api.github.com/orgs/xenserver/repos?per_page=200 | python3 -c 'import sys, json; print("\n".join(repo["ssh_url"] for repo in json.load(sys.stdin)))' | xargs -L1 -P 10 git clone
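
One caveat for larger organizations: the GitHub API caps per_page at 100, so a single request will silently miss any repositories beyond the first hundred, and you need to page through the results. A minimal Python sketch of the same idea, paging through the API and cloning in parallel with the standard library (the function names here are my own, not part of any API), might look like:

```python
import json
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def ssh_urls(repos):
    """Extract the ssh_url field from one parsed page of /orgs/:org/repos."""
    return [repo["ssh_url"] for repo in repos]


def all_repo_urls(org):
    """Page through the org's repos; GitHub caps per_page at 100 per request."""
    urls, page = [], 1
    while True:
        url = f"https://api.github.com/orgs/{org}/repos?per_page=100&page={page}"
        with urlopen(url) as resp:
            repos = json.load(resp)
        if not repos:  # an empty page means we have seen everything
            return urls
        urls.extend(ssh_urls(repos))
        page += 1


def clone_all(urls, workers=10):
    """Clone the given URLs in parallel, mirroring xargs -L1 -P 10."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pool.map(lambda u: subprocess.run(["git", "clone", u]), urls)


if __name__ == "__main__":
    clone_all(all_repo_urls(sys.argv[1]))
```

A thread pool is enough here because the work is I/O-bound: each worker just blocks on a git subprocess, exactly as the xargs version does.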