Jekyll2023-08-04T02:59:45+02:00https://0xcc.re/feed.xml0xcc.reSimple hacker blog :)Mikal VillaCheckout pull requests from github locally2022-03-20T17:45:00+01:002022-03-20T17:45:00+01:00https://0xcc.re/2022/03/20/checkout-pull-requests-from-github-locally<p>Sometimes, for various reasons, I want to get one or more pull requests from GitHub into my local copy of a repository. In the beginning I went as far as checking out the repository of whoever wrote the pull request, or, if it was small enough, copy-pasting when it’s only for quick tests. Luckily that’s in the past; there are better ways of doing it!</p>
<p>In the repository’s <code class="language-plaintext highlighter-rouge">.git/config</code> (or your global <code class="language-plaintext highlighter-rouge">.gitconfig</code>) add the following:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[remote "origin"]
fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
</code></pre></div></div>
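You can also sanity-check the refspec without touching GitHub at all by rehearsing it against a throwaway local “origin”. This is just an illustration I put together; the repo names are made up, and the PR number 4387 here is only a stand-in for a real pull request:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A bare repo standing in for GitHub, with one branch and one fake PR ref.
git init -q --bare origin.git
git init -q work && cd work
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'initial commit'
git push -q ../origin.git HEAD:refs/heads/master HEAD:refs/pull/4387/head
cd ..

# Clone it, add the same fetch refspec, and fetch:
# the PR now appears as origin/pr/4387.
git clone -q origin.git demo 2>/dev/null && cd demo
git config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr/*'
git fetch -q origin
git branch -r
```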
<p>Testing with a random Golang project found on GitHub, the results look like this (I checked it out some days ago, and git is now updating my local copy):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>❯ git fetch
remote: Enumerating objects: 66, done.
remote: Counting objects: 100% (66/66), done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 66 (delta 48), reused 53 (delta 45), pack-reused 0
Unpacking objects: 100% (66/66), 334.02 KiB | 195.00 KiB/s, done.
From https://github.com/AdguardTeam/AdGuardHome
dc0d081b..77858586 master -> origin/master
* [new ref] refs/pull/4387/head -> origin/pr/4387
* [new ref] refs/pull/4400/head -> origin/pr/4400
* [new ref] refs/pull/4403/head -> origin/pr/4403
* [new ref] refs/pull/4406/head -> origin/pr/4406
* [new ref] refs/pull/4411/head -> origin/pr/4411
* [new branch] qq-rule -> origin/qq-rule
</code></pre></div></div>
<p>Then you can just do <code class="language-plaintext highlighter-rouge">git checkout origin/pr/4387</code> and there it is, at localhost :)</p>Mikal VillaSometimes, for various reasons, I want to get one or more pull requests from GitHub into my local copy of a repository. In the beginning I went as far as checking out the repository of whoever wrote the pull request, or, if it was small enough, copy-pasting when it’s only for quick tests. Luckily that’s in the past; there are better ways of doing it!Dangerous toys: Anything to ed25519 (SSH Keys)2022-02-01T14:00:00+01:002022-02-01T14:00:00+01:00https://0xcc.re/2022/02/01/dangerous-toys-anything-to-ed25519-ssh-keys<p><strong>Disclaimer:</strong> I’m not a professional cryptographer or anything like that; I’ve just found it interesting to dig deep into ECC and similar crypto. Use at your own risk and such :)</p>
<p><strong>BIG NOTE:</strong> After much discussion in communities, there seems to be a general misunderstanding where many people think this is a “password to key” tool.
It is not, and if you use it for that purpose you’re stupid. This is for letting you generate more pleasant seeds for your private key, like a 64-char hex string, rather than having to fight the keyboard typing in a base64 key when you’re already super stressed because your infrastructure is down and you’re in the middle of a disaster recovery (ref below). Wouldn’t you love typing in all those A’s, praying you counted right?</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>—–BEGIN OPENSSH PRIVATE KEY—– b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtz
</code></pre></div></div>
<p><strong>Extra note:</strong> Some awesome users at <a href="https://lobste.rs/s/aiysqb/dangerous_toys_anything_ed25519_ssh_keys">lobste.rs</a> discovered some newbie mistakes I made way back when I wrote the initial code, early in my crypto learning. I’ve since patched the code and carefully read over it, hunting for any remaining issues. Special thanks to lobste.rs user <a href="https://lobste.rs/u/zeebo">zeebo</a>!</p>
<p>A couple of years back I wrote a Golang binary which took whatever it got from <code class="language-plaintext highlighter-rouge">stdin</code> and generated a ready-to-use openssh keypair (an <code class="language-plaintext highlighter-rouge">id_ed25519</code> and an <code class="language-plaintext highlighter-rouge">id_ed25519.pub</code>, with standard names). I then forgot about it and didn’t even put it into git. However, back in August last year we had a case at work where such a tool would be perfect, so I managed to dig it back up and release it on github for anyone to use.</p>
<p>This tool/toy (whatever you wanna call it) can be very practical, but also extremely dangerous if used lightly without thinking about the security. It might just open your production environment to the wrong people if you don’t use good enough seeds. And just so you know, the human brain isn’t really equipped with good entropy, so my advice is to never use a key derived from something you figured out on your own. Use a strong random key - the primary usage, at least on my side, is backup keys that can be written down on paper and stored offline somewhere safe.</p>
<p>However, I can’t stress enough how crazy bad an idea it is to use weak seeds, like the ed25519 key you get from <code class="language-plaintext highlighter-rouge">echo hello | anything2ed25519</code>, because everyone can easily guess that kind of weak key. Run a dictionary through it and you have plenty of weak keys to test if you’re the bad guy. So be sure to use a lot stronger seeds.</p>
<p>If any key from this tool should end up in anyone’s production environment, you should strictly follow the cryptocurrency guidelines for mnemonic keys: nothing less than 24 random words, or cryptographically strong random data of 32 bytes or more.</p>
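For example (my suggestion, not something the tool mandates), `openssl rand` gives you seed material with real entropy that is still paper-friendly:

```shell
# 32 random bytes, printed as 64 hex characters: strong entropy, and much
# easier to copy from paper than base64 (single case, no +/= symbols).
seed=$(openssl rand -hex 32)
echo "$seed"
```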
<p>The trick is that the seed is run through SHA-256, and the resulting 32 bytes form a valid seed for an Ed25519 key (the twisted Edwards curve over the prime 2^255 - 19). So in theory you can even generate your keys from a binary file like a <code class="language-plaintext highlighter-rouge">jpg</code> or <code class="language-plaintext highlighter-rouge">dll</code>, though I wouldn’t recommend it: bigger files are hard to keep bit-for-bit identical over time, especially if you’re passing those files over different filesystems, operating systems and other scary stuff that might change something outside of your control, and then you won’t be able to recover your key.</p>
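The hashing step is easy to demonstrate in isolation. The sketch below only shows the SHA-256 part (the Ed25519 derivation and OpenSSH serialization inside the tool aren’t reproduced here) and assumes GNU coreutils’ `sha256sum`:

```shell
# Identical seed material in, identical 32-byte digest (64 hex chars) out;
# change a single character and the digest changes completely.
printf 'my seed material' | sha256sum | cut -d' ' -f1
printf 'my seed material' | sha256sum | cut -d' ' -f1   # same digest again
printf 'My seed material' | sha256sum | cut -d' ' -f1   # totally different
```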
<p><img src="/assets/images/ed25519.png" alt="Ed25519" /></p>
<h2 id="recommendations">Recommendations</h2>
<p>If cryptography and security isn’t your strong suit, you should probably skip this tool. It will probably do more harm than good.</p>
<h2 id="demo-time">Demo time</h2>
<p>In my “demo” I use just a random seed, but the tool is deterministic: it will produce the exact same private and public key, given that the input is exactly the same both times.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>anything2ed25519 on main [?] via 🐹 v1.17.2
❯ openssl rand -base64 64 | tr -d '\n' | ./anything2ed25519-darwin-amd64.macho
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtz
c2gtZWQyNTUxOQAAACB2LyhtKUdHZGdnpktmRSsBf2zW1/zorATpx2yPdqdSUQAA
AIiaywRCmssEQgAAAAtzc2gtZWQyNTUxOQAAACB2LyhtKUdHZGdnpktmRSsBf2zW
1/zorATpx2yPdqdSUQAAAEA0YjU5ODZiZTk4MTY2MzA4Y2NiZGMwMzU1NTVkYjc3
ZHYvKG0pR0dkZ2emS2ZFKwF/bNbX/OisBOnHbI92p1JRAAAAAAECAwQF
-----END OPENSSH PRIVATE KEY-----
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIteEylauQyG0WXHPv18+u7B+yBURUMVUf9+F3ogil86
anything2ed25519 on main [?] via 🐹 v1.17.2
❯ ls id_ed25519*
id_ed25519 id_ed25519.pub
</code></pre></div></div>
<h2 id="sources">Sources</h2>
<p>The repo is found at <a href="https://github.com/mikalv/anything2ed25519">https://github.com/mikalv/anything2ed25519</a></p>Mikal VillaDisclaimer: I’m not a professional cryptographer or anything like that; I’ve just found it interesting to dig deep into ECC and similar crypto. Use at your own risk and such :)I2P outproxies: How to and General information2019-04-07T09:05:18+02:002019-04-07T09:05:18+02:00https://0xcc.re/2019/04/07/i2p-outproxies-howto-and-general-information<h2 id="what-is-a-outproxy">What is an outproxy?</h2>
<p>An outproxy is the I2P equivalent of a Tor exit node for http(s): a way to let your tunnel clients exit via your router to the “clearnet”.</p>
<h2 id="background-history">Background History</h2>
<p>As you might not know, I2P was designed to be an internal network, where outproxies weren’t needed since all traffic was expected to remain inside the I2P network. Compared to Tor, I2P focused on internal hidden services rather than exits. For a long time only one or two outproxies existed, shipped with the default I2P config on new installations. Only recently has I2P put more focus on outproxies and their service.</p>
<p>The two recent outproxies (or one; they both exit to the same backend service) are <code class="language-plaintext highlighter-rouge">false.i2p</code> and <code class="language-plaintext highlighter-rouge">outproxy-tor.meeh.i2p</code>, which have been running since ~2012-2013ish.</p>
<h2 id="for-potential-outproxy-providers">For potential outproxy providers</h2>
<h3 id="how-much-traffic-can-i-expect">How much traffic can I expect?</h3>
<p>How much can you handle? :) There is rate-limit configuration in the hidden service manager where you can limit the number of requests from clients on your outproxy tunnel; the section is named “Server Throttling”. Use that to limit the traffic to an outproxy.</p>
<p>![I2P_Router_Console_-<em>Hidden_Services_Manager</em>-<em>Server_Throttling](/content/images/2019/04/I2P_Router_Console</em>-<em>Hidden_Services_Manager</em>-_Server_Throttling.png)</p>
<h3 id="what-kind-of-traffic-can-i-expect">What kind of traffic can I expect?</h3>
<p>Nothing special that we haven’t seen before. If you’ve ever run a Tor exit node it’s about the same, just with fewer scriptkiddies and shit.</p>
<p>Some common domains that are requested on a daily basis;</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bing.com
digicert.com
facebook.com
gmail.com
gstatic.com
hotmail.com
instagram.com
mail.ru
microsoft.com
netflix.com
play.google.com
pornhub.com
startpage.com
steampowered.com
vimeo.com
yahoo.com
youtube.com
</code></pre></div></div>
<h3 id="tldr---howto-setup-an-outproxy">TL;DR - How to set up an outproxy</h3>
<ol>
<li>Create a server tunnel on your I2P outproxy router</li>
<li>Set up privoxy/squid or another HTTP proxy software of your choice</li>
<li>Point the server tunnel to your http proxy port</li>
<li>Increase the tunnel count; in my setup I have <code class="language-plaintext highlighter-rouge">16</code> tunnels and <code class="language-plaintext highlighter-rouge">3</code> backup tunnels</li>
</ol>
<p>![I2P_Router_Console_-<em>Hidden_Services_Manager](/content/images/2019/04/I2P_Router_Console</em>-_Hidden_Services_Manager.png)</p>
<p>The <code class="language-plaintext highlighter-rouge">i2ptunnel.config</code> should then look something like this;</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tunnel.10.description=false.i2p outproxy
tunnel.10.name=false.i2p
tunnel.10.option.enableUniqueLocal=true
tunnel.10.option.i2cp.destination.sigType=EdDSA_SHA512_Ed25519
tunnel.10.option.i2cp.enableAccessList=false
tunnel.10.option.i2cp.enableBlackList=false
tunnel.10.option.i2cp.encryptLeaseSet=false
tunnel.10.option.i2cp.reduceIdleTime=1200000
tunnel.10.option.i2cp.reduceOnIdle=false
tunnel.10.option.i2cp.reduceQuantity=1
tunnel.10.option.i2p.streaming.connectDelay=0
tunnel.10.option.i2p.streaming.limitsManuallySet=true
tunnel.10.option.i2p.streaming.maxConcurrentStreams=0
tunnel.10.option.i2p.streaming.maxConnsPerDay=0
tunnel.10.option.i2p.streaming.maxConnsPerHour=0
tunnel.10.option.i2p.streaming.maxConnsPerMinute=128
tunnel.10.option.i2p.streaming.maxTotalConnsPerDay=0
tunnel.10.option.i2p.streaming.maxTotalConnsPerHour=0
tunnel.10.option.i2p.streaming.maxTotalConnsPerMinute=0
tunnel.10.option.inbound.backupQuantity=3
tunnel.10.option.inbound.length=1
tunnel.10.option.inbound.lengthVariance=0
tunnel.10.option.inbound.nickname=false.i2p
tunnel.10.option.inbound.quantity=16
tunnel.10.option.outbound.backupQuantity=3
tunnel.10.option.outbound.length=1
tunnel.10.option.outbound.lengthVariance=0
tunnel.10.option.outbound.nickname=false.i2p
tunnel.10.option.outbound.quantity=16
tunnel.10.option.rejectInproxy=false
tunnel.10.option.rejectReferer=false
tunnel.10.option.rejectUserAgents=false
tunnel.10.option.shouldBundleReplyInfo=true
tunnel.10.option.useSSL=false
tunnel.10.privKeyFile=i2ptunnel-false.i2p.priv.dat
tunnel.10.startOnLoad=true
tunnel.10.targetHost=127.0.0.1
tunnel.10.targetPort=3128
tunnel.10.type=server
</code></pre></div></div>
<h3 id="router-tweaking">Router tweaking</h3>
<p>If you want to run a high-traffic outproxy you should also tweak your <code class="language-plaintext highlighter-rouge">router.config</code>. You can either edit the file on disk while the router is stopped, or use the advanced config page in the webconsole.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>i2np.ntcp.maxConnections=4096
i2np.udp.maxConnections=4096
router.maxParticipatingTunnels=512
</code></pre></div></div>
<p>Note that <code class="language-plaintext highlighter-rouge">maxParticipatingTunnels</code> is set to a low number for a reason, and it’s simple: if you want high traffic on your outproxy, let the router focus on the service rather than on forwarding other routers’ traffic, which isn’t relevant to your outproxy service.</p>
<h3 id="outproxy-service-announcement">Outproxy service announcement</h3>
<p>Currently there is no automatic way for I2P routers to discover new outproxies, so after you have set up your I2P outproxy service you need to announce it, for example on the I2P forum, the IRC network, Reddit, etc.</p>
<h3 id="how-do-i-setup-my-router-to-use-a-outproxy">How do I set up my router to use an outproxy?</h3>
<p>Outproxies can be configured under your HTTP Proxy tunnel in the Hidden Services Manager web UI; see the image below. Note that instead of listing only one outproxy, as in my setup, you can use a comma-separated list so your router uses multiple outproxies.
<img src="/content/images/2019/04/I2P_Router_Console_-_Hidden_Services_Manager_-_Client2.png" alt="I2P Router Console - Hidden Services Manager - Client2" /></p>Mikal VillaWhat is an outproxy?How to run 128 (testnet) I2P routers in multiple subnets on a single Linux system.2018-10-16T13:36:20+02:002018-10-16T13:36:20+02:00https://0xcc.re/2018/10/16/howto-run-128-i2p-routers-in-multiple-subnets-on-a-single-linux-system<p>For a long time there has been talk, at least internally, about the need for a testnet for <a href="https://geti2p.net/en/">I2P</a>. Testing in production isn’t trivial :)</p>
<p>I’ve been quietly working on this mission, on and off, for a while now, but I finally completed something worth publishing, in other words a working testnet setup/teardown script. When thinking about the problem, two technologies come to mind that might help make the testnet idea a reality: <a href="http://man7.org/linux/man-pages/man8/ip-netns.8.html">network namespaces (ip-netns)</a> in Linux and <a href="http://man.openbsd.org/rdomain.4">rdomains</a> in OpenBSD - both of them for network isolation and virtualization. There are probably more, but these are the ones I know of at the time of writing.</p>
<p>Currently I’m running a 128-node testnet divided across ~7 subnets and 8 virtual switches, all on the same Linux kernel/machine.</p>
<p>Github link to my script files is <a href="https://github.com/mikalv/i2p-testnet">here</a>.</p>
<p>Most likely, this is the first post in a series on the topic of I2P testnets.</p>
<p><strong>Some quick FAQs/General information:</strong></p>
<ul>
<li>Why didn’t I use docker?
<ul>
<li>Bloated for this use case.</li>
<li>Faster to update one binary than to build a container.</li>
<li>Only the network part needs to be isolated, as long as routers have custom data/config directories.</li>
</ul>
</li>
<li>Kubernetes support?
<ul>
<li>Chill, it’s 0.0.1 alpha.</li>
<li>It’s coming :)</li>
<li>Yes, it means it has to support docker/containers as well in the end.</li>
</ul>
</li>
<li>Information collection?
<ul>
<li>TBA.</li>
<li>For now, manual curl/browser to namespaces.</li>
</ul>
</li>
<li>Why i2pd and not java i2p?
<ul>
<li>Bigger network for less resources.</li>
<li>Will of course support java quite soon.</li>
</ul>
</li>
<li>I’m getting errors, how can I debug this shit?
<ul>
<li>Run the script via <code class="language-plaintext highlighter-rouge">bash -x ./i2ptestnet.sh</code>, or change the shebang from <code class="language-plaintext highlighter-rouge">#!/usr/bin/env bash</code> to <code class="language-plaintext highlighter-rouge">#!/usr/bin/env bash -x</code> (beware that Linux passes only a single argument on the shebang line, so the explicit <code class="language-plaintext highlighter-rouge">bash -x</code> invocation is the portable option)</li>
</ul>
</li>
<li>Script hacking notes at the bottom of the post.</li>
</ul>
<p><strong>What are the dependencies for now?</strong></p>
<ul>
<li>i2p-tools (reseed server) needs to be resolvable in $PATH
<ul>
<li>Built in Go; you’ll need the Go toolchain to compile it.</li>
</ul>
</li>
<li>i2pd needs to be resolvable in $PATH
<ul>
<li>libboost.</li>
<li>openssl.</li>
<li>(optional) libminiupnpc.</li>
</ul>
</li>
<li>Linux kernel with network namespaces enabled.
<ul>
<li>I doubt you’ve got a Linux system not supporting them, don’t worry.</li>
</ul>
</li>
</ul>
<p><strong>What are the steps to create a testnet?</strong></p>
<ul>
<li>We’ll need some initial nodes.</li>
<li>And a reseed webserver to host the routerInfo’s for them.</li>
<li>I’ve added my routerinfos in a tar.xz in the github repo if you want to skip this part.</li>
</ul>
<ol>
<li>Boot up the network.</li>
<li>From the git repo, use <code class="language-plaintext highlighter-rouge">collect-routerinfos.sh</code> to move the created router.info files into the reseed/netDb directory with a routerInfo-(whatever).dat format.</li>
<li>Restart the network so it can bootstrap.</li>
<li>Wait for the routers to connect and create tunnels.</li>
<li>Enjoy a coffee. :)</li>
</ol>
<p><strong>Usage?</strong></p>
<ul>
<li>Use <code class="language-plaintext highlighter-rouge">./i2ptestnet.sh -c testnet.conf</code> to only clean (teardown the testnet and clean the host)</li>
<li>Use <code class="language-plaintext highlighter-rouge">./i2ptestnet.sh testnet.conf</code> to initialize a testnet (no need to run the clean command up front since it will check & remove any existing related setup before it starts)</li>
</ul>
<p><strong>So, how does the script work?</strong></p>
<p>First off, it’s a clean script, meaning it can clean up after itself; it also tears everything down before bringing it up, so hanging leftovers from earlier runs can’t ruin the current one.</p>
<p>It parses a config file written in a simple DSL. That gives you the possibility to split your testnet into several subnets instead of running all of your test nodes in the same one, which is lame - and it makes it less of a hassle to connect with a Kubernetes cluster at big scale. :)</p>
<p>I’ve cut some of the <a href="https://en.wikipedia.org/wiki/Domain-specific_language">DSL</a> out below. Basically I’ve defined several networks, with a “switch” type to link them and “host” types for defining “fake i2p routers” - both types represent a network namespace. It also supports running commands in a network namespace via the “exec” subcommand of either “switch” or “host”.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#### Special cases ####
host fw
dev veth0 10.0.0.254/24
dev veth1 10.1.1.254/24
dev vbr0eth2
dev vbr0eth3
dev vbr1eth4
dev vbr1eth5
dev vbr2eth6
dev vbr2eth7
dev vbr3eth8
dev vbr3eth9
dev vbr4eth10
dev vbr4eth11
bridgedev vbr0 vbr0eth2 vbr0eth3 10.23.23.254/24
bridgedev vbr1 vbr1eth4 vbr1eth5 10.45.45.254/24
bridgedev vbr2 vbr2eth6 vbr2eth7 192.168.24.1/24
bridgedev vbr3 vbr3eth8 vbr3eth9 10.78.17.254/24
bridgedev vbr4 vbr4eth10 vbr4eth11 10.78.100.254/24
route default via 10.1.1.253
exec echo 1 > /proc/sys/net/ipv4/ip_forward
i2preseed
host gw
dev veth0 fw/veth1 10.1.1.253/24
dev veth1 192.168.1.254/24
dev veth2 192.168.2.254/24
route default via 10.1.1.254
exec echo 1 > /proc/sys/net/ipv4/ip_forward
#### Normal Hosts ####
host host01
dev veth0 10.0.0.1/24
route default via 10.0.0.254
i2pdnode 15
host host02
dev veth0 10.0.0.2/24
route default via 10.0.0.254
i2pdnode 16
.....
host host126test
dev veth0 10.78.100.26/24
route default via 10.78.100.254
i2pdnode 126
host host127test
dev veth0 10.78.100.27/24
route default via 10.78.100.254
i2pdnode 127
host host128test
dev veth0 10.78.100.28/24
route default via 10.78.100.254
i2pdnode 128
###### Switches #########
switch sw0
dev d01 fw/veth0
dev d02 host01/veth0
dev d03 host02/veth0
dev d04 fw/vbr2eth6
dev d05 fw/vbr3eth8
dev d06 fw/vbr4eth10
switch sw2
dev d01 fw/vbr0eth2
dev d02 host21/veth0
dev d03 host22/veth0
dev d04 host23/veth0
dev d05 host24/veth0
switch sw3
dev d01 fw/vbr0eth3
dev d02 host31/veth0
dev d03 host32/veth0
dev d04 host33/veth0
dev d05 host34/veth0
.....
</code></pre></div></div>
<p>The script itself is “just” about 500 lines of code, which is also pasted below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/usr/bin/env bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
DATADIR=${DATADIR:-"$DIR/data"}
PIDDIR=${PIDDIR:-"$DATADIR/pids/"}
LOGDIR=${LOGDIR:-"$DATADIR/logs/"}
RESEEDSERVER=${RESEEDSERVER:-"https://192.168.24.1:8443/"}
RESEEDSERVER_IP=${RESEEDSERVER_IP:-"192.168.24.1"}
RESEED_DIR="$DATADIR/reseed"
NETDB_DIR="$DATADIR/reseed/netDb"
SIGNER=${SIGNER:-"meeh@mail.i2p"}
SIGNER_FNAME=${SIGNER_FNAME:-"meeh_at_mail.i2p"}
mkdir -p $DATADIR $PIDDIR $LOGDIR
# Tmux session for all routers
tmux new-session -d -s routers top
I2PDWRAPPER=$(cat <<EOF
#!/usr/bin/env bash\n
ip netns exec \$2 i2pd \
--datadir=$DATADIR/testnode\$1 \
--log stdout \
--floodfill \
--ipv4=1 \
--host=\$3 \
--ifname4=veth0 \
--pidfile=$PIDDIR/router\$1.pid \
--port=4764 \
--reseed.urls=$RESEEDSERVER | tee -a $LOGDIR/router\$1.log >> $LOGDIR/all-routers.log &
EOF
)
for i in `seq 1 128`; do
mkdir -p $DATADIR/testnode$i/certificates/reseed
cp $DATADIR/reseed/${SIGNER_FNAME}.crt $DATADIR/testnode$i/certificates/reseed/
echo -e $I2PDWRAPPER > $DATADIR/testnode$i/i2pdwrapper.sh
done
#echo $I2PDWRAPPER
#exit 0
haderror=""
if [ "`id -r -u`" != "0" ]
then
echo "Must be root"
echo ""
haderror="Y"
fi
if [ "$haderror" -o $# -lt 1 ]
then
echo "Usage: sudo ./i2ptestnet.sh [-c] setup-file"
exit 1
fi
if [ "$1" = "-c" ]
then
cleanonly=Y
shift
else
cleanonly=
fi
setup="$1"
shift
setupbase="$(basename $setup)"
errline=""
error=""
if ! MYTMP="`mktemp -d -t i2ptestnet-XXXXXX`"
then
echo >&2
echo >&2
echo >&2 "Cannot create temporary directory."
echo >&2
exit 1
fi
myexit() {
status=$?
if [ "$error" != "" ]
then
echo "$setupbase: line $errline: $error"
fi
rm -rf $MYTMP
exit $status
}
trap myexit INT
trap myexit HUP
trap myexit 0
CURDIR=`pwd`/
export CURDIR
set -e
mkdir $MYTMP/setup
(echo "cd $CURDIR"; sed = "$setup" | sed -e 'N;s/\n/\t/' -e 's/^/lineno=/' -e '/exec/s/[<>|&]/\\&/g' -e '/i2preseed/s/[<>|&]/\\&/g' -e '/i2pdnode/s/[<>|&]/\\&/g') > $MYTMP/setup/$setupbase
mkdir $MYTMP/ns
mkdir $MYTMP/runtime-lines
current_name=
create_namespace() {
errline=$lineno
local type="$1"
current_name="$2"
NSTMP=$MYTMP/ns/$current_name
if [ -d $NSTMP ]
then
error="$current_name: $(cat $NSTMP/type) already defined"
return 1
fi
mkdir $NSTMP
mkdir $NSTMP/devices
mkdir $NSTMP/devicepairs
echo $type > $NSTMP/type
echo 0 > $NSTMP/forward
> $NSTMP/routes
> $NSTMP/devlist
> $NSTMP/pairlist
> $NSTMP/bridgelist
echo $current_name >> $MYTMP/nslist
echo $errline > $MYTMP/runtime-lines/$current_name
}
host() {
errline=$lineno
create_namespace host "$1"
}
switch() {
errline=$lineno
create_namespace switch "$1"
}
dev() {
errline=$lineno
device="$1"
shift
if [ ! "$current_name" ]
then
error="cannot define dev outside of a host or switch"
return 1
fi
if [ -f $NSTMP/devices/$device ]
then
error="$current_name/$device: already defined"
return 1
fi
local otherns=
local otherdev=
case $1 in
*/[a-zA-Z@]*)
otherns=$(echo $1 | cut -f1 -d/)
otherdev=$(echo $1 | cut -f2 -d/)
shift
if [ -f $MYTMP/ns/$otherns/devicepairs/$otherdev ]
then
error="$otherns/$otherdev: already has paired device"
return 1
fi
;;
esac
local type="$(cat $NSTMP/type)"
if [ "$*" != "" -a "$type" = "switch" ]
then
error="device in switch may not specify an IP address"
return 1
fi
f=$NSTMP/devices/${device%@*}
> $f
for ip in "$@"
do
case $ip in
*/*)
echo "$ip" >> $f
;;
*)
error="IP address should be expressed as ip/mask"
return 1
;;
esac
done
if [ "$otherdev" ]
then
echo "$current_name ${device%@*}" > $MYTMP/ns/$otherns/devicepairs/${otherdev%@*}
echo "n/a n/a" > $NSTMP/devicepairs/${device%@*}
echo "$otherns ${otherdev%@*}" >> $NSTMP/pairlist
echo $errline > $MYTMP/runtime-lines/$otherns-pair-${otherdev%@*}
fi
echo ${device%@*} >> $NSTMP/devlist
echo $errline > $MYTMP/runtime-lines/$current_name-dev-${device%@*}
return 0
}
route() {
errline=$lineno
if [ ! "$current_name" ]
then
error="can only specify route in a host"
return 1
fi
local type="$(cat $NSTMP/type)"
if [ "$type" = "switch" ]
then
error="can only specify route in a host"
return 1
fi
echo "$*" >> $NSTMP/routes
echo $errline >> $MYTMP/runtime-lines/$current_name-routes
return 0
}
bridgedev() {
errline=$lineno
device="$1"
shift
if [ ! "$current_name" ]
then
error="can only specify bridgedev in a host"
return 1
fi
local type="$(cat $NSTMP/type)"
if [ "$type" = "switch" ]
then
error="can only specify bridgedev in a host"
return 1
fi
if [ -f $NSTMP/devices/${device%@*} ]
then
error="$current_name/${device%@*}: already defined"
return 1
fi
ipf=$NSTMP/devices/${device%@*}
devf=$ipf-bridged
> $ipf
> $devf
for ipordev in "$@"
do
case $ipordev in
*/*)
echo "$ipordev" >> $ipf
;;
*)
echo "$ipordev" >> $devf
;;
esac
done
echo ${device%@*} >> $NSTMP/bridgelist
echo $errline > $MYTMP/runtime-lines/$current_name-dev-${device%@*}
return 0
}
exec() {
errline=$lineno
if [ ! "$current_name" ]
then
error="can only specify exec in a host or switch"
return 1
fi
echo "$*" >> $NSTMP/exec
echo $errline >> $MYTMP/runtime-lines/$current_name-exec
return 0
}
i2pdnode() {
errline=$lineno
if [ ! "$current_name" ]
then
error="can only specify i2pdnode in a host or switch"
return 1
fi
echo -e "$*" >> $NSTMP/i2pdnodeid
echo $errline >> $MYTMP/runtime-lines/$current_name-i2pdnodeid
return 0
}
i2preseed() {
errline=$lineno
if [ ! "$current_name" ]
then
error="can only specify i2preseed in a host or switch"
return 1
fi
echo -e "$*" >> $NSTMP/i2preseed
echo $errline >> $MYTMP/runtime-lines/$current_name-i2preseed
return 0
}
cd $MYTMP/setup
. $setupbase
errline=""
cd $CURDIR
exists_ns() {
if [ "$(ip netns list | grep "^$1\$")" ]
then
return 0
else
return 1
fi
}
dev_in_ns() {
ip netns exec $1 ip link list | grep "^[0-9]" | cut -d: -f2 | tr -d ' '
}
get_pids() {
# Not in all versions:
# ip netns pids $1
find -L /proc/[0-9]*/ns -maxdepth 1 -samefile /var/run/netns/$1 2>/dev/null | cut -f3 -d/
}
shutdown_ns() {
for i in $(dev_in_ns $1)
do
ip netns exec $1 ip link set ${i%@*} down
done
pids=$(get_pids $1)
if [ "$pids" ]; then kill $pids; sleep 1; fi
pids=$(get_pids $1)
if [ "$pids" ]; then kill -9 $pids; fi
}
startup_ns() {
for i in $(dev_in_ns $1)
do
ip netns exec $1 ip link set ${i%@*} up
done
}
while read ns
do
while read dev
do
read errline < $MYTMP/runtime-lines/$ns-dev-$dev
if [ ! -f $MYTMP/ns/$ns/devicepairs/$dev ]
then
error="$ns/$dev has no paired device"
exit 1
fi
done < $MYTMP/ns/$ns/devlist
while read otherns otherdev
do
read errline < $MYTMP/runtime-lines/$otherns-pair-$otherdev
if [ ! -f $MYTMP/ns/$otherns/devices/$otherdev ]
then
error="$otherns/$otherdev not defined to be paired with"
exit 1
fi
done < $MYTMP/ns/$ns/pairlist
done < $MYTMP/nslist
while read ns
do
read errline < $MYTMP/runtime-lines/$ns
error="shutting down namespace: $ns"
exists_ns $ns && shutdown_ns $ns
done < $MYTMP/nslist
while read ns
do
read errline < $MYTMP/runtime-lines/$ns
error="deleting namespace"
exists_ns $ns && ip netns del $ns
done < $MYTMP/nslist
if [ "$cleanonly" ]
then
error=""
exit 0
fi
while read ns
do
read errline < $MYTMP/runtime-lines/$ns
error="adding namespace"
type="$(cat $MYTMP/ns/$ns/type)"
ip netns add $ns
if [ "$type" = "switch" ]
then
error="adding bridge to switch namespace"
ip netns exec $ns brctl addbr switch
fi
done < $MYTMP/nslist
while read ns
do
type="$(cat $MYTMP/ns/$ns/type)"
while read dev
do
read errline < $MYTMP/runtime-lines/$ns-dev-${dev%@*}
read ons odev < $MYTMP/ns/$ns/devicepairs/${dev%@*}
if [ "$ons" != "n/a" ]
then
error="adding virtual ethernet to $type namespace"
ip link add ${dev%@*} netns $ns type veth peer netns $ons name ${odev%@*}
else
: # gets set up from the other end
fi
if [ "$type" = "switch" ]
then
error="adding virtual ethernet to bridge"
ip netns exec $ns brctl addif switch ${dev%@*}
fi
while read ip
do
error="adding ip address to virtual ethernet"
ip netns exec $ns ip addr add $ip dev ${dev%@*}
done < $MYTMP/ns/$ns/devices/${dev%@*}
done < $MYTMP/ns/$ns/devlist
while read bridge
do
read errline < $MYTMP/runtime-lines/$ns-dev-$bridge
error="adding bridge to host namespace"
ip netns exec $ns brctl addbr $bridge
while read dev
do
error="adding virtual interface to bridge"
ip netns exec $ns brctl addif $bridge ${dev%@*}
done < $MYTMP/ns/$ns/devices/$bridge-bridged
while read ip
do
error="adding ip to virtual interface"
ip netns exec $ns ip addr add $ip dev $bridge
done < $MYTMP/ns/$ns/devices/$bridge
done < $MYTMP/ns/$ns/bridgelist
done < $MYTMP/nslist
while read ns
do
read errline < $MYTMP/runtime-lines/$ns
error="starting namespace"
startup_ns $ns
while read route
do
errline=$(tr "\n" "/" < $MYTMP/runtime-lines/$ns-routes | sed -e s:/$::)
error="adding route to $ns"
ip netns exec $ns ip route add $route
done < $MYTMP/ns/$ns/routes
if [ -f $MYTMP/ns/$ns/exec ]
then
errline=$(tr "\n" "/" < $MYTMP/runtime-lines/$ns-exec | sed -e s:/$::)
error="running exec for $ns"
ip netns exec $ns sh -e $MYTMP/ns/$ns/exec
fi
if [ -f $MYTMP/ns/$ns/i2pdnodeid ]
then
errline=$(tr "\n" "/" < $MYTMP/runtime-lines/$ns-i2pdnodeid | sed -e s:/$::)
error="running i2pdnode for $ns"
routerId="`cat $MYTMP/ns/$ns/i2pdnodeid`"
#echo "Router id to handle is $routerId"
routerAddress="`ip netns exec $ns ip -o addr show veth0 | grep -v inet6 | awk '{ print $4 }' | sed 's#/24##'`"
ip netns exec $ns bash $DATADIR/testnode$routerId/i2pdwrapper.sh $routerId $ns $routerAddress
#i2pd \
# --datadir=$DATADIR/testnode$routerId \
# --log file \
# --daemon \
# --floodfill \
# --ipv4=1 \
# --host=$routerAddress \
# --ifname=veth0 \
# --logfile=$LOGDIR/router$routerId.log \
# --pidfile=$PIDDIR/router$routerId.pid \
# --port=4764 \
# --reseed.urls=$RESEEDSERVER || {
# echo "Can't start i2pd, total fucking error!"
# exit 1
# }
fi
if [ -f $MYTMP/ns/$ns/i2preseed ]
then
errline=$(tr "\n" "/" < $MYTMP/runtime-lines/$ns-i2preseed | sed -e s:/$::)
error="running i2preseed for $ns"
cd $RESEED_DIR
#i2p-tools keygen --tlsHost $RESEEDSERVER_IP
#i2p-tools keygen --signer $SIGNER
echo "Setting up reseed"
ip netns exec $ns i2p-tools reseed \
--numRi 20 \
--key $RESEED_DIR/${SIGNER_FNAME}.pem \
--netdb $RESEED_DIR/netDb \
--tlsHost $RESEED_DIR/${RESEEDSERVER_IP} \
--tlsCert $RESEED_DIR/${RESEEDSERVER_IP}.crt \
--tlsKey $RESEED_DIR/${RESEEDSERVER_IP}.pem \
--signer $RESEED_DIR/${SIGNER_FNAME} &
cd -
fi
done < $MYTMP/nslist
error=""
while read ns
do
echo "---------------------- $ns --------------------"
ip netns exec $ns ip addr show
ip netns exec $ns ip route show
ip netns exec $ns brctl show
echo ""
done < $MYTMP/nslist
exit 0
</code></pre></div></div>
<p><strong>Script hacking/implementation notes:</strong></p>
<ul>
<li>The max of 128 routers is just a hardcoded value at line 36 and can be changed. You don’t need to adjust it if you run fewer nodes.</li>
<li>For i2pd arguments, find the I2PDWRAPPER variable at the top of the script.</li>
<li>I learned a new bash skill: <code class="language-plaintext highlighter-rouge">${device%@*}</code> will, for example, turn <code class="language-plaintext highlighter-rouge">veth1@if1</code> into <code class="language-plaintext highlighter-rouge">veth1</code> by removing the <code class="language-plaintext highlighter-rouge">@</code> and everything after it. <code class="language-plaintext highlighter-rouge">%</code> can also be used with <code class="language-plaintext highlighter-rouge">.</code> for file extension magic and so on.</li>
</ul>
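<p>A quick sketch of those suffix/prefix expansions, runnable in any POSIX shell:</p>

```shell
device="veth1@if1"
echo "${device%@*}"   # -> veth1        ("%" strips the shortest suffix matching "@*")

file="backup.tar.gz"
echo "${file%.*}"     # -> backup.tar   ("%" strips the shortest suffix matching ".*")
echo "${file%%.*}"    # -> backup       ("%%" strips the longest matching suffix)
echo "${file#*.}"     # -> tar.gz       ("#" strips the shortest matching prefix)
```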
<p>Other tools I’ve found handy are <a href="http://www.openvswitch.org/">OpenVSwitch</a> and <a href="http://mininet.org">Mininet</a>. OpenVSwitch gets really handy once we start talking about clusters and scaling horizontally, when single-box setups aren’t enough anymore. It’s for example used behind the scenes in the network component of the <a href="https://docs.openstack.org/newton/networking-guide/deploy-ovs-provider.html">Openstack</a> cloud platform.</p>Mikal VillaFor a long time, at least internally, there has been talk about the need for a testnet for I2P. Testing in production isn’t trivial :)Gitifying I2P: How to make git clone resumable2018-09-07T20:50:45+02:002018-09-07T20:50:45+02:00https://0xcc.re/2018/09/07/gitifying-i2p-how-to-make-git-clone-resumable<p>Have you ever cloned a git repository on a bad internet connection? I have, and it <strong>doesn’t work</strong>. When I lived in the Philippines for a year and a half, I had to clone repositories to a server of mine in Norway, and then share the repo via torrent so I could download it to my laptop in the Philippines. And it seems many have asked for this feature, or workarounds, when I’ve googled the topic. Luckily, there is hope for git and resumable clone/fetch(es).</p>
<p>This year, I’m so lucky that I got the chance to work on I2P fulltime. I2P is a mix network, somewhat like Tor. <a href="https://geti2p.net/en/comparison/tor">This link</a> compares the two. Working for &amp; with I2P is beyond awesome, but we’ve got a minor problem, one which I bet almost none of the readers have used, <strong>ever</strong>. It’s our version control system, named <a href="https://www.monotone.ca/">Monotone</a>. It’s actually not that bad in itself, but there are a couple of issues with it:</p>
<ul>
<li>No one afaik uses it besides us (the I2P project).</li>
<li>Very little documentation; if you google it you won’t find much.</li>
<li>It hasn’t been updated since 2011.</li>
<li>It’s hard to get into, even with knowledge of other VCSes.</li>
</ul>
<p>Monotone was chosen before the git revolution, and it was quite a good tool because it supported cryptographic signing of commits and resume on network errors, which is perfect for I2P.</p>
<p>However, now many years later, Git is familiar to almost every developer, and it supports both signed commits and recovery/resume out of the box! And no, I’m not talking about the archive or bundle features.</p>
<p>This started with research into git internals and git remote helpers, and after a careful look at all of the git binaries and scripts, I found an option in a rather hidden git subcommand named http-fetch. You can do <code class="language-plaintext highlighter-rouge">man git-http-fetch</code> or read the manual online at <a href="https://git-scm.com/docs/git-http-fetch#git-http-fetch---recover">git’s webpage</a> if you wish to read up on it.</p>
<p>For my test over I2P, I decided to repack the server’s git repository with the command <code class="language-plaintext highlighter-rouge">git --git-dir=bare-test-repo.git repack --threads 8 --max-pack-size 1m -A -d -f -F</code>, which creates a lot of pack files with a maximum size of 1mb each, instead of one big 300mb pack. Please also note that just by tweaking parameters and repacking/unpacking objects I have grown and shrunk the Git repo between 646mb, 384mb and 148mb without adding or removing anything from the content - so there are performance and optimization possibilities here. Note that Git by default keeps everything in one packfile because that’s best for performance. At last, I served the Git repository over http with <a href="https://github.com/schacon/grack">grack</a> and pointed an I2P tunnel towards it.</p>
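<p>The effect of <code class="language-plaintext highlighter-rouge">--max-pack-size</code> is easy to reproduce on a throwaway repository. The sketch below assumes git is in PATH; all paths, file counts and sizes are arbitrary demo values:</p>

```shell
#!/bin/sh
# Create a scratch repo with a few incompressible files, then repack it
# into packfiles capped at 1 MiB each instead of one big pack.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
for i in 1 2 3 4 5 6 7 8; do
    head -c 400000 /dev/urandom > "file$i.bin"
done
git add .
git -c user.name=demo -c user.email=demo@example.org commit -qm "add data"
git repack --max-pack-size 1m -A -d -f -F
ls .git/objects/pack/*.pack   # several small packs, not one big one
git count-objects -v
```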
<p>I also noticed http-fetch will always try loose objects first, so you can either split your pack files into chunks that fit your needs or just keep them all loose. The internals section of the Git book can teach you more about packfiles and loose objects <a href="https://git-scm.com/book/en/v2/Git-Internals-Packfiles">here</a>.</p>
<p>Here is a script to unpack packfiles to loose objects in a git repository:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/sh
# Glob test: true if at least one packfile exists (a plain [ -f glob ] breaks with multiple packs)
if ls .git/objects/pack/*.pack >/dev/null 2>&1; then
mkdir /tmp/tmpgit.$$
GIT_DIR=/tmp/tmpgit.$$ git init
for pack in .git/objects/pack/*.pack; do
GIT_DIR=/tmp/tmpgit.$$ git unpack-objects < "$pack"
if [ $? -ne 0 ]; then
echo "Unpack of $pack failed, aborting"
exit 1
fi
done
rsync -a --info=PROGRESS2 --delete /tmp/tmpgit.$$/objects/ .git/objects/
rm -fr /tmp/tmpgit.$$
else
echo "No packs to unpack"
exit 1
fi
</code></pre></div></div>
<p>Name it <code class="language-plaintext highlighter-rouge">git-unpack</code>, place it in your PATH, and you can use it as <code class="language-plaintext highlighter-rouge">git unpack</code> inside the root directory of your Git repository :)</p>
<p>A sample from access log produced by grack:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>127.0.0.1 - - [07/Sep/2018:21:33:34 +0200] "GET /i2p.git/objects/info/packs HTTP/1.1" 200 28303 0.0024
127.0.0.1 - - [07/Sep/2018:21:33:37 +0200] "GET /i2p.git/objects/74/275493be573bc4917f0dec38f141e6d3d1dec3 HTTP/1.1" 404 - 0.0008
127.0.0.1 - - [07/Sep/2018:21:33:39 +0200] "GET /i2p.git/objects/69/78829554290491439f4515943b88f740a8fdca HTTP/1.1" 404 - 0.0009
127.0.0.1 - - [07/Sep/2018:21:33:41 +0200] "GET /i2p.git/objects/fa/05f88bfe4a8e1ae4b07822ccd0d7176615c420 HTTP/1.1" 404 - 0.0010
127.0.0.1 - - [07/Sep/2018:21:33:43 +0200] "GET /i2p.git/objects/d0/9e885133c6a23773a040562ec96e66f7e00c29 HTTP/1.1" 404 - 0.0014
127.0.0.1 - - [07/Sep/2018:21:33:45 +0200] "GET /i2p.git/objects/ea/d1f93d7c91c660c2ec71dd0076d4c61e44dfbb HTTP/1.1" 404 - 0.0009
127.0.0.1 - - [07/Sep/2018:21:33:46 +0200] "GET /i2p.git/objects/pack/pack-27414fbc03964eeff8b3922ef6bea0b1ade4fa6e.pack HTTP/1.1" 200 1047379 0.0825
</code></pre></div></div>
<p>So to actually be able to clone a Git repository over I2P you’ll have to replace clone with three commands, which of course can be combined into a script. git-http-fetch expects to be run inside a repo, so we can’t use it with the clone subcommand. Instead, take a look below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir i2p.git
cd i2p.git
git init # Do not add anything to it, no commits, leave it empty.
# Next command will list all refs (branches/tags) with their HEAD hashes.
http_proxy=127.0.0.1:4444 curl -v http://4grg5bjtr5m6fpat2wpwfq7axpfhkuvmsna5z4iqlthu72k3bv5a.b32.i2p/i2p.git/info/refs
# this can return something like:
# 4ead982831f4097ff058a847fdb2b89cc97d2ad7 refs/heads/master
# the hash starting on 4ead9 is needed in next command
# --
# The command below can be placed in a loop that checks its exit code
# and exits the loop when it's 0, but retries if not.
# This is the resumable magic :D
http_proxy=127.0.0.1:4444 git http-fetch --recover -a 4ead982831f4097ff058a847fdb2b89cc97d2ad7 http://4grg5bjtr5m6fpat2wpwfq7axpfhkuvmsna5z4iqlthu72k3bv5a.b32.i2p/i2p.git
</code></pre></div></div>
<p>With the commands above, I was able to “clone” the whole I2P codebase (a git export from monotone with full commit history) hosted on one I2P router to my laptop, via another I2P router connecting to the git-host router.</p>
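<p>The retry loop mentioned in the comments can be written as a small helper function. A sketch; the proxy, hash and b32 address are the example values from the listing above:</p>

```shell
#!/bin/sh
# retry CMD...: run CMD until it exits 0, pausing between attempts.
# git http-fetch --recover continues from whatever was already fetched,
# so an interrupted transfer simply resumes on the next attempt.
retry() {
    until "$@"; do
        echo "transfer interrupted, retrying in 10s..." >&2
        sleep 10
    done
}

# Example usage (values from the listing above):
# export http_proxy=127.0.0.1:4444
# retry git http-fetch --recover -a 4ead982831f4097ff058a847fdb2b89cc97d2ad7 \
#     http://4grg5bjtr5m6fpat2wpwfq7axpfhkuvmsna5z4iqlthu72k3bv5a.b32.i2p/i2p.git
```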
<p>Another option is to use wget or something and then pipe it into the pack parser utils in Git. To get a list of packs to download do a <code class="language-plaintext highlighter-rouge">GET /i2p.git/objects/info/packs</code> on the server. Basically you append <code class="language-plaintext highlighter-rouge">/objects/info/packs</code> to the repository url.</p>
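<p>A sketch of that approach, with the example URL and proxy from above. In the dumb-HTTP pack list each entry looks like <code class="language-plaintext highlighter-rouge">P pack-&lt;sha1&gt;.pack</code>, and <code class="language-plaintext highlighter-rouge">git index-pack</code> rebuilds the missing .idx file for each downloaded pack:</p>

```shell
#!/bin/sh
# Download every packfile listed in objects/info/packs and index it.
# Run inside the freshly-initialised, still-empty repository.
base="http://4grg5bjtr5m6fpat2wpwfq7axpfhkuvmsna5z4iqlthu72k3bv5a.b32.i2p/i2p.git"
export http_proxy=127.0.0.1:4444

list_packs() {
    # Each entry in the pack list is "P pack-<sha1>.pack"
    awk '/^P /{print $2}'
}

wget -qO- "$base/objects/info/packs" | list_packs | while read -r pack; do
    wget -c -O ".git/objects/pack/$pack" "$base/objects/pack/$pack"  # -c resumes
    git index-pack ".git/objects/pack/$pack"
done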
<p>Short conclusion:
Git continued where it stopped, both after a network read error and after I killed the process, so http-fetch can indeed recover with the recover flag.
Also, I’m not saying there aren’t better options out there for resumable Git, but I at least haven’t found one yet :)</p>
<p>I’ve collected some links I’ve looked over and some manuals in a <a href="https://gist.github.com/mikalv/70d2327eaa5634372a3efa9026004a01">gist</a> which anyone can look into. Since this webpage is also available on I2P as 0xcc.i2p, and GitHub isn’t - I’ll embed the markdown below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Git in depth
## How it works
* https://git-scm.com/book/en/v2/Git-Internals-Packfiles
* https://git-scm.com/book/en/v2/Git-Internals-Transfer-Protocols
* https://rovaughn.github.io/2015-2-9.html
* https://git-scm.com/docs
* http://shafiulazam.com/gitbook/7_how_git_stores_objects.html
* https://git.wiki.kernel.org/index.php/Git_FAQ
* https://mirrors.edge.kernel.org/pub/software/scm/git/docs/gitremote-helpers.html
* https://docs.google.com/document/d/1X5SnleaX4qpLCc4QMAMWdvrA5QRsUO-YxXKjhSZRPpY/edit#
* http://marklodato.github.io/visual-git-guide/index-en.html
* http://gitready.com/beginner/2009/02/17/how-git-stores-your-data.html
* https://chromium.googlesource.com/chromium/src/+/master/docs/git_cookbook.md
* https://book.git-scm.com/
* https://github.com/peritus/git-remote-couch/blob/master/src/git_remote_couch/__init__.py
* https://mirrors.edge.kernel.org/pub/software/scm/git/docs/technical/pack-format.txt
* http://repo.or.cz/w/git.git?a=tree;f=Documentation/technical;hb=HEAD
* https://www.atlassian.com/blog/git/tear-apart-repository-git-way
* https://github.com/potherca-bash/git-split-file
* https://githubengineering.com/counting-objects/
* https://msdn.microsoft.com/en-us/magazine/mt493250.aspx
* https://hackernoon.com/when-you-git-in-trouble-a-version-control-story-97e6421b5c0e
## Commands of special interest
* `git verify-pack -v .git/objects/pack/pack-*.idx` - Output data about the indexes
* `git show-index < .git/objects/pack/pack-*.idx` - Output data about the indexes (more and other format than above)
* `git rev-list REFS` - list all revs in a ref
* `find .git/objects -type f` - get all object files
* `find .git/refs -type f` - list all refs
* `git count-objects -v` - list objects and types
## Manuals worth looking into
Many, even hidden commands can be found @ `git help -a`
* git-cat-file
* git-ls-tree
* git-show-ref
* git-rev-list
* git-upload-pack
* git-show-index
* git-unpack-objects
* git-unpack-file
* git-index-pack
* git-fetch-pack
* git-gc
* git-fast-import
* git-ls-files
* git-pack-refs
* git-index-pack
* git-http-fetch
</code></pre></div></div>Mikal VillaHave you ever cloned a git repository on a bad internet connection? I have, and it doesn’t work. When I lived in the Philippines for a year and a half, I had to clone repositories to a server of mine in Norway, and then share the repo via torrent so I could download it to my laptop in the Philippines. And it seems many have asked for this feature, or workarounds, when I’ve googled the topic. Luckily, there is hope for git and resumable clone/fetch(es).Russia, shame on you! Censorship is lame.2018-04-19T20:13:35+02:002018-04-19T20:13:35+02:00https://0xcc.re/2018/04/19/russia-shame-on-you-censorship-is-lame<p>Or in Russian; Позор Российскому руководству! Цензура это полный бред.</p>
<ul>
<li>NOTE: At the bottom, this text is translated into Russian by orignal from I2Pd. Thanks to him. I barely know a word of Russian myself.</li>
</ul>
<p>Before, I kind of respected you as a government - but now you’ve totally lost me.</p>
<p>Anyway, I’m deeply involved in the I2P project these days, and I saw a way to help poor Russian citizens reach Telegram even though their government is totally fucked up. Long live the internet!</p>
<p>As an answer to Russia’s block on Telegram, I introduce the I2P Telegram bundle. It’s for non-technical people: it uses the I2P router to create a hidden tunnel out past Russia’s answer to the Great Firewall of China.</p>
<p>Here is a screenshot of the first working bundle I put up:</p>
<p><img src="/content/images/2018/04/Telegram_I2P_Bundle.png" alt="Telegram_I2P_Bundle" /></p>
<p>For impatient people like me: please remember that I2P isn’t exactly the same as the Tor project, which you might have heard about. I2P handles bootstrapping in a much more decentralised way, and therefore takes longer to bootstrap the first time you run it - or if it’s been a long time since the last run. In general, the longer your i2pd process runs, the better and more stable it will be. A common thing to see in such a case is shown in the picture below. If you get that, please note it’s a perfect time for a cup of tea, coffee or a weed joint.</p>
<p><img src="/content/images/2018/04/telegram-reconnect-waiting-for-i2pd.png" alt="telegram-reconnect-waiting-for-i2pd" /></p>
<p>Going the China way isn’t an option, I can’t stop saying that..</p>
<p>Anyway, for the more technically interested people: it’s just a simple bat script that launches a preconfigured i2pd daemon in the background, while also launching the Telegram portable binary, which is preconfigured to use the SOCKSv5 proxy that the i2pd daemon provides. No more magic than that.</p>
<p><strong>I2P Telegram Bundle v0.1alpha</strong></p>
<p>Meeh’s mirror: https://furu1.censorship.help/I2P-Telegram/
Dropbox Mirror: https://www.dropbox.com/sh/wf4o39xtnzpv2uq/AAB3guCOeORX48Vf9jCSXvesa?dl=0
Google Drive Mirror: https://drive.google.com/open?id=1XsIEQaTj0Qnh3OFCSUx8UdbAPV8n5-gb
Github Mirror: https://github.com/mikalv/UnblockTelegramInRussia/releases/tag/v0.1a</p>
<p>——– Russian translation ———</p>
<p>Позор Российскому руководству! Цензура это полный бред.</p>
<p>Ранее, я относился к вам с уважением, как к правительству. Но не в данном случае.</p>
<p>Я занимаюсь I2P давно и серьезно. И вижу способ помочь бедным россиянам с доступом в телеграм, несмотря на действия правительства в этом направлении. Они вполне могут меня заказать, ну и что? Настоящая долгая жизнь - это Интернет.</p>
<p>В ответ на блокировку Телеграма, я хочу предложить сборку Телеграма с I2P, для обычных пользователей. Объясняя на пальцах, здесь создается скрытый тоннель через I2P за пределы российского аналога “Золотого щита” (Великий китайский файрвол. - прим.пер.).</p>
<p>Как это выглядит на практике:</p>
<p><img src="/content/images/2018/04/Telegram_I2P_Bundle.png" alt="Telegram_I2P_Bundle" /></p>
<p>Для нетерпеливых людей, типа меня, запомните: I2P это ни разу не Tor. I2P стартует с более децентрализованной сети, и потому требует больше времени на запуск в первый раз.
Однако, чем дольше i2pd работает, тем лучше и стабильнее становится доступ.
Как правило, вы увидите у себя примерно то что на картинке ниже. Если это так, то у вас есть возможность сходить выпить чашечку чая или кофе.</p>
<p><img src="/content/images/2018/04/telegram-reconnect-waiting-for-i2pd.png" alt="telegram-reconnect-waiting-for-i2pd" /></p>
<p>Не устаю повторять, что попытка идти китайским путем - не лучшее решение.</p>
<p>Для понимающих людей, технически - это простой батник, запускаюший сконфигурированный i2pd демон в фоновом режиме, а затем, и Телеграм, с заданным настройками для работы через SOCKSv5 прокси, обеспечиваемый i2pd. Короче говоря - никакой магии.</p>
<p><strong>I2P Telegram Bundle v0.1alpha</strong></p>
<p>Meeh’s mirror: https://furu1.censorship.help/I2P-Telegram/
Dropbox Mirror: https://www.dropbox.com/sh/wf4o39xtnzpv2uq/AAB3guCOeORX48Vf9jCSXvesa?dl=0
Google Drive Mirror: https://drive.google.com/open?id=1XsIEQaTj0Qnh3OFCSUx8UdbAPV8n5-gb
Github Mirror: https://github.com/mikalv/UnblockTelegramInRussia/releases/tag/v0.1a</p>Mikal VillaOr in Russian; Позор Российскому руководству! Цензура это полный бред.R.I.P Return-oriented Programming (ROP)2018-04-02T13:12:12+02:002018-04-02T13:12:12+02:00https://0xcc.re/2018/04/02/r.i.p-return-oriented-programming<p>Intel has this <a href="https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf">Control-flow Enforcement Technology (CET)</a>; as of October 20th 2017, there are no Intel processors currently being sold that support it yet. But it will be available sooner or later - for now, have fun :)</p>
<p>Control-flow Enforcement Technology aims to prevent return-oriented programming (ROP) and call/jump-oriented programming (COP/JOP) attacks. The Intel-developed technology tries to prevent control-flow attacks with a shadow stack that keeps track of the expected return addresses, raising faults if a return address does not match what the shadow stack expects. CET also has indirect branch tracking for stopping jump/call-oriented attacks.</p>
<p>Also noteworthy, I don’t see any indication that kernel support would be required, since these instructions that GCC would be using are platform independent and would/could be used on any system we know of.</p>
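<p>You can already probe your own toolchain for the compiler-side support. A sketch; <code class="language-plaintext highlighter-rouge">-fcf-protection</code> is the GCC flag name, and <code class="language-plaintext highlighter-rouge">endbr64</code> is the x86-64 landing-pad instruction that CET-instrumented code places at indirect branch targets:</p>

```shell
#!/bin/sh
# Probe whether the local compiler accepts -fcf-protection and, if so,
# peek at the endbr64 markers it emits.
cat > /tmp/cet-demo.c <<'EOF'
int main(void) { return 0; }
EOF

cet_cc=no
if gcc -fcf-protection=full -c /tmp/cet-demo.c -o /tmp/cet-demo.o 2>/dev/null; then
    cet_cc=yes
    objdump -d /tmp/cet-demo.o 2>/dev/null | grep endbr64
fi
echo "compiler CET support: $cet_cc"
```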
<p>This will be supported by GCC 8.1; the start of the work on implementing this in GCC can be found at <a href="https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=3c0f15b4cebccdbb6388a8df5933e69c9b773149">https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=3c0f15b4cebccdbb6388a8df5933e69c9b773149</a>.</p>Mikal VillaIntel has this Control-flow Enforcement Technology (CET); as of October 20th 2017, there are no Intel processors currently being sold that support it yet. But it will be available sooner or later - for now, have fun :)RSA key strength and math2017-08-24T12:37:33+02:002017-08-24T12:37:33+02:00https://0xcc.re/2017/08/24/rsa-strength-and-math<p><strong>TL;DR</strong> Since time is a factor here, \(2048\) bit is probably fine for most systems and users as long as it is replaced often, like Let’s Encrypt does with its <a href="https://letsencrypt.org/2015/11/09/why-90-days.html">ninety-day lifetimes</a>. However for a <a href="https://en.wikipedia.org/wiki/Certificate_authority">CA</a> I wouldn’t use less than \(4096\) bit keys, and probably \(8192\) bit keys if they were to live longer than ~2030 maybe.</p>
<p><em>Update: There are algorithms with sub-exponential running time for factoring integers, which is much more faster than iterating through all the numbers. I’m aware of that.</em></p>
<p>I’ve often heard from people who really know math - friends who studied math or physics at a higher level, but also randoms - that those who choose to use really big RSA or ECC keys don’t understand the math behind them. Maybe it’s true, maybe it’s not. Let’s find out.</p>
<p>For myself, I’m not the best at math for sure; back at school I had great grades in math, but in my teenage years I realized it was more brainwashing and training to listen to authority. I remember I joked about who would ever need algebra after school. Well, here I am learning machine learning with algebra. (Fail.) So don’t take me as an expert in math. This also means that if you have some higher education in math, I’ll probably just bore you with this post.</p>
<p>Many believe bigger is better, for various reasons. However, for TLS traffic \(2048\)bit keys are still not advised against. How come? Let’s try the math.</p>
<p>First I want to mention that it’s estimated that there are \(7.5x10^{18}\) grains of sand on Earth.
That’s \(7.5E18\), or in human readable form <em>7,500,000,000,000,000,000</em>.
So let’s for fun see the result if we multiply it by itself.</p>
<p>(\(7.5x10^{18})x(7.5x10^{18}\)) = <code class="language-plaintext highlighter-rouge">56,250,000,000,000,000,000,000,000,000,000,000,000</code></p>
<p>That’s still only a 38 digit number, even though we multiplied the estimated number of grains of sand on Earth by itself.
On the other hand, it’s estimated that there are \(10^{80}\) atoms in the visible universe. That’s a <em>little number</em> with only 81 digits.
<code class="language-plaintext highlighter-rouge">100,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000</code></p>
<p>Here is a list of how much you can store vs amount of bits.</p>
<ul>
<li>In \(1\)bit you can store \(2\) different values (0 or 1).</li>
<li>In \(4\)bits you can store \(16\) different values (\(2x2x2x2\) or \(2^{4}\)).</li>
<li>In \(8\)bits you can store \(256\) different values (\(2x2x2x2x2x2x2x2\) or \(2^{8}\) or \(2.56E2\)).</li>
<li>In \(16\)bits you can store \(65536\) different values (\(2^{16}\) or \(6.5536E4\)).</li>
<li>In \(32\)bits you can store \(4294967296\) different values (\(2^{32}\) or \(4.2E9\)).</li>
<li>In \(64\)bits you can store \(18446744073709551616\) different values (\(2^{64}\)).</li>
<li>In \(128\)bits you can store \(340282366920938463463374607431768211456\) different values (\(2^{128}\)).</li>
</ul>
<p>And so on.</p>
<p>At \(128\)bit we crossed 38 digits, with 39 digits. That means at \(128\)bit we’re talking about numbers greater than the number of grains of sand on Earth, multiplied by itself.</p>
<p>Could we brute-force a \(1024\) bit key, if the approach were to enumerate every possible key?</p>
<p>Short answer: <strong>Nope.</strong></p>
<p>The number of primes smaller than \(x\) is <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">approximately</a> \(\frac{x}{\ln x}\). Therefore the number of \(512\)bit primes (about the length you need for a \(1024\)bit modulus) is approximately:</p>
\[\frac{2^{513}}{\ln 2^{513}}-\frac{2^{512}}{\ln 2^{512}} \approx 3.76×10^{151}\]
<p>An RSA key is a pair of two distinct prime numbers, that’s the holy secret behind it. The number of RSA moduli is therefore roughly:</p>
\[\frac{(3.76×10^{151})^2}{2}-3.76×10^{151} \approx 7.07×10^{302}\]
<p>Wikipedia tells me there are about \(10^{80}\) atoms in the <a href="http://en.wikipedia.org/wiki/Observable_universe#Matter_content">observable universe</a>. Assume that you could use each of those atoms as a CPU, and each of those CPUs could enumerate one modulus per millisecond. To enumerate all \(1024\) bit RSA moduli you would need:</p>
<p>$$\begin{eqnarray*}
7.07×10^{302}ms / 10^{80}&=&7.07×10^{222}ms\\
&=&7.07×10^{219}s\\
&=&1.96×10^{216}h\\
&=&2.24×10^{212} \text{years}\\
\end{eqnarray*}$$</p>
<p>Just as a comparison: the universe is about \(13.75×10^{9}\) years old.</p>
<p>There are much faster ways to find a secret key, however: there are algorithms with sub-exponential running time for factoring integers. But as I’ve understood it, it’s still gonna take quite some time to find the key. I haven’t had time to learn about them yet, but I’ve got some links if you want to get deeper into that topic: <a href="https://en.wikipedia.org/wiki/Integer_factorization">Integer factorization at Wikipedia</a>, and Chapter 15, “Factoring and Discrete Logarithms in Subexponential Time”, in the book “Mathematics of Public Key Cryptography” by Steven Galbraith. The chapter is available as a <a href="https://www.math.auckland.ac.nz/~sgal018/crypto-book/ch15.pdf">pdf here</a>.</p>
<p>However, \(1024\) bit is not recommended anymore, and it’s nothing less than 309 digits. \(768\)bit keys have also been proven broken. That’s a bit freaky.</p>
<p>Back to \(1024\) bit: those numbers above don’t even start to convey how big a \(2048\) bit number is; it’s a freaking 617 digits, where \(1024\) bit was 309 and \(768\) bit is 232 digits. You can try the calculation yourself at <a href="https://www.wolframalpha.com/input/?i=2%5E768">wolframalpha</a>. The calculation is \(2^{2048}\).</p>
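<p>As a sanity check on those digit counts: the number \(2^{b}\) has \(\lfloor b\log_{10}2\rfloor+1\) decimal digits, and with \(\log_{10}2\approx0.30103\) that gives</p>
\[\lfloor 768\times0.30103\rfloor+1=232,\qquad \lfloor 1024\times0.30103\rfloor+1=309,\qquad \lfloor 2048\times0.30103\rfloor+1=617\]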
<p>Remember, as said earlier, I’m not a math expert, but my conclusion on this topic is that \(2048\) bit seems fine for internal usage and webpages without sensitive information. Since time is a factor here, \(2048\) bit is probably fine for most systems and users as long as it is replaced often, like Let’s Encrypt does with its <a href="https://letsencrypt.org/2015/11/09/why-90-days.html">ninety-day lifetimes</a>. However for a <a href="https://en.wikipedia.org/wiki/Certificate_authority">CA</a> I wouldn’t use less than \(4096\) bit keys, and probably \(8192\) bit keys if they were to live longer than ~2030 maybe. In the end, maybe I’m too paranoid, but my minimal key size will still be \(4096\) bit as long as it’s an option versus \(2048\) bit.</p>Mikal VillaTL;DR Since time is a factor here, \(2048\) bit is probably fine for most systems and users as long as it is replaced often, like Let’s Encrypt does with its ninety-day lifetimes. However for a CA I wouldn’t use less than \(4096\) bit keys, and probably \(8192\) bit keys if they were to live longer than ~2030 maybe.Setup EFI Development environment on Mac OSX Sierra (10.12.X)2017-07-10T05:55:39+02:002017-07-10T05:55:39+02:00https://0xcc.re/2017/07/10/setup-efi-development-on-macosx-sierra-10-12<p>Oh no! A lot of text. Well, luckily half of the post is troubleshooting. EFI development setup is easy :)</p>
<p>Okay, before starting this guide you should have some tools installed already.</p>
<ul>
<li>Mac OS X. For this guide I run version 10.12.5 (16F73)</li>
<li>Xcode 8. For this guide I run version 8.3.2 (8E2002)</li>
<li>Homebrew. A package manager for Mac OS X. https://brew.sh</li>
</ul>
<p>First of all visit <a href="https://opensource.apple.com/release/developer-tools-821.html">https://opensource.apple.com/release/developer-tools-821.html</a></p>
<p>You need to download the <code class="language-plaintext highlighter-rouge">cctools</code> package. This package contains various tools to deal with Mach-O files, which is the default binary file format used by the XNU kernel. It is an equivalent to the GNU binutils package on the GNU OS. (<a href="https://opensource.apple.com/tarballs/cctools/cctools-895.tar.gz">directlink</a>)</p>
<p>While you’re still at it, download the <code class="language-plaintext highlighter-rouge">ld64</code> package as well. This package contains the dynamic linker ld, as well as other tools and libraries related to it. It replaces the old ld-classic from the cctools package that was not 64 bit-capable. (<a href="https://opensource.apple.com/tarballs/ld64/ld64-274.2.tar.gz">directlink</a>)</p>
<p>You will also need some llvm headers, because of that you need to download the llvm source at http://releases.llvm.org/download.html#4.0 (<a href="http://releases.llvm.org/4.0.0/llvm-4.0.0.src.tar.xz">directlink</a>)</p>
<p>To avoid any confusion about paths, set <code class="language-plaintext highlighter-rouge">export EFIWORKSPACE=~/EfiWorkspace</code> and make sure it exists with <code class="language-plaintext highlighter-rouge">mkdir $EFIWORKSPACE</code></p>
<h3 id="buidling-ld64">Building <code class="language-plaintext highlighter-rouge">ld64</code></h3>
<p>First of all, I recommend that you patch the project file to avoid errors you otherwise have a great chance of getting. The patch is pasted below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--- ld64.xcodeproj/project.pbxproj2 2017-05-28 18:14:26.000000000 +0200
+++ ld64.xcodeproj/project.pbxproj 2017-05-28 18:29:44.000000000 +0200
@@ -1463,19 +1463,26 @@
F933D92409291AC90083EAC8 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
+ ARCHS = "$(ARCHS_STANDARD_32_64_BIT)";
GCC_DYNAMIC_NO_PIC = NO;
GCC_TREAT_WARNINGS_AS_ERRORS = NO;
+ HEADER_SEARCH_PATHS = "$(SRCROOT)/../cctools-895/include/";
ONLY_ACTIVE_ARCH = YES;
SDKROOT = macosx.internal;
+ USER_HEADER_SEARCH_PATHS = "";
};
name = Debug;
};
F933D92509291AC90083EAC8 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
+ ARCHS = "$(ARCHS_STANDARD_32_64_BIT)";
GCC_DYNAMIC_NO_PIC = NO;
GCC_TREAT_WARNINGS_AS_ERRORS = NO;
+ HEADER_SEARCH_PATHS = "$(SRCROOT)/../cctools-895/include/";
+ ONLY_ACTIVE_ARCH = YES;
SDKROOT = macosx.internal;
+ USER_HEADER_SEARCH_PATHS = "";
};
name = Release;
};
@@ -1500,9 +1507,12 @@
F9849FF810B5DE8E009E9878 /* Release-assert */ = {
isa = XCBuildConfiguration;
buildSettings = {
+ ARCHS = "$(ARCHS_STANDARD_32_64_BIT)";
GCC_DYNAMIC_NO_PIC = NO;
GCC_TREAT_WARNINGS_AS_ERRORS = NO;
+ HEADER_SEARCH_PATHS = "$(SRCROOT)/../cctools-895/include/";
SDKROOT = macosx.internal;
+ USER_HEADER_SEARCH_PATHS = "";
};
name = "Release-assert";
};
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ln -sf /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk $EFIWORKSPACE/ld64-274.2/macosx.internal
# Depending on if you want a debug or release build, choose one of the following:
# Debug
xcodebuild ARCHS="i386 x86_64" ONLY_ACTIVE_ARCH=NO -sdk /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk -scheme libprunetrie -configuration Debug -destination 'platform=OS X,arch=x86_64' build
# Release
xcodebuild ARCHS="i386 x86_64" ONLY_ACTIVE_ARCH=NO -sdk /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk -scheme libprunetrie -configuration Release -destination 'platform=OS X,arch=x86_64' build
# Mine ended up in /Users/mikalv/Library/Developer/Xcode/DerivedData/ld64-ffssbvtdnxqygzfcarplcybwxblo/Build/Products/Debug/libprunetrie.a
# Look at the last output from the command to find yours
# Copy it to a path that cctools makefile will look for libraries in
cp /Users/mikalv/Library/Developer/Xcode/DerivedData/ld64-ffssbvtdnxqygzfcarplcybwxblo/Build/Products/Debug/libprunetrie.a /usr/local/lib/libprunetrie.a
</code></pre></div></div>
<h3 id="running-cmake-on-llvm">Running cmake on <code class="language-plaintext highlighter-rouge">llvm</code></h3>
<p>Next up, we need to run cmake on llvm, but we don’t need to compile it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir $EFIWORKSPACE/build-llvm
cd $EFIWORKSPACE/build-llvm
cmake $EFIWORKSPACE/../llvm-4.0.0.src
cp include/llvm/Support/DataTypes.h $EFIWORKSPACE/../cctools-895/include/llvm/Support/
cd $EFIWORKSPACE
</code></pre></div></div>
<h3 id="buidling-cctools">Building <code class="language-plaintext highlighter-rouge">cctools</code></h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cp $EFIWORKSPACE/cctools-895/include/llvm-c/Disassembler.h $EFIWORKSPACE/
rm -fr $EFIWORKSPACE/cctools-895/include/llvm-c
cp -r $EFIWORKSPACE/llvm-4.0.0.src/include/llvm-c cctools-895/include/llvm-c
cd $EFIWORKSPACE/cctools-895
make
cd efitools
make
# Finally we got the mtoc binary, now copy it to a directory in your PATH
cp mtoc.NEW /usr/local/bin/mtoc
cd $EFIWORKSPACE
</code></pre></div></div>
<h3 id="installing-remaining-dependencies">Installing remaining dependencies</h3>
<p>In this guide I assume you got <code class="language-plaintext highlighter-rouge">/usr/local/bin</code> in your path because of Homebrew. If not, this will not work at all.</p>
<p>Now we need to install, or upgrade the tools we need from Homebrew.
<code class="language-plaintext highlighter-rouge">brew install nasm acpica qemu</code></p>
<p>You can verify all tools are installed with checking their version.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mtoc
qemu-system-x86_64 --version
nasm -v
iasl -v
</code></pre></div></div>
<h3 id="edk-ii">EDK II</h3>
<p>Finally we’re done with dependencies and can start with the fun part. Since using git tags is too complicated for the EDK II developers, we have to sync via commit hashes. The one I’m using is <code class="language-plaintext highlighter-rouge">f4d3ba87bb8f5d82d3b80532ea4c83b7bbca41c0</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd $EFIWORKSPACE
git clone https://github.com/tianocore/edk2.git
cd edk2
# Optionally pin to the same commit as me.
git checkout f4d3ba87bb8f5d82d3b80532ea4c83b7bbca41c0
make -C BaseTools
source edksetup.sh
</code></pre></div></div>
<p>Now you can start your own projects and play around with EFI. Enjoy :)</p>
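<p>As a concrete starting point (my sketch, not part of any official setup): after <code class="language-plaintext highlighter-rouge">edksetup.sh</code> has generated <code class="language-plaintext highlighter-rouge">Conf/target.txt</code>, you can edit it to pick a platform and toolchain. The values below are just an example for building the bundled OVMF firmware with Xcode’s clang; adjust them to your setup:</p>

```
# Conf/target.txt (excerpt) - example values, adjust to taste
ACTIVE_PLATFORM  = OvmfPkg/OvmfPkgX64.dsc
TARGET           = DEBUG
TARGET_ARCH      = X64
TOOL_CHAIN_TAG   = XCODE5
```

<p>With that in place, running <code class="language-plaintext highlighter-rouge">build</code> should (if everything above went well) produce an <code class="language-plaintext highlighter-rouge">OVMF.fd</code> firmware image somewhere under <code class="language-plaintext highlighter-rouge">Build/OvmfX64</code>, which you can boot with <code class="language-plaintext highlighter-rouge">qemu-system-x86_64 -bios OVMF.fd</code>.</p>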
<p><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /></p>
<h2 id="troubleshooting-compiler-errors">Troubleshooting compiler errors</h2>
<p>If you encounter errors, I collected below a list of some of the errors you might have hit.</p>
<h4 id="the-missing-llvmsupportdatatypesh-header-file">The missing <code class="language-plaintext highlighter-rouge">llvm/Support/DataTypes.h</code> header file</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>=========== /Applications/Xcode.app/Contents/Developer/usr/bin/make all for libstuff =============
cc -std=c99 -Os -DLTO_SUPPORT -g -I../../include -Wall -D_MACH_I386_THREAD_STATUS_FPSTATE_LEGACY_FIELD_NAMES_ -D_ARCHITECTURE_I386_FPU_FPSTATE_LEGACY_FIELD_NAMES_ -c \
-I/Developer/usr/local/include \
-I/usr/local/include \
-o ./lto.o ../lto.c
cc -Os -DLTO_SUPPORT -g -I../../include -Wall -D_MACH_I386_THREAD_STATUS_FPSTATE_LEGACY_FIELD_NAMES_ -D_ARCHITECTURE_I386_FPU_FPSTATE_LEGACY_FIELD_NAMES_ -c -o ./llvm.o ../llvm.c
In file included from ../llvm.c:6:
../../include/llvm-c/Disassembler.h:18:10: fatal error: 'llvm/Support/DataTypes.h' file not found
#include "llvm/Support/DataTypes.h"
^
1 error generated.
make[2]: *** [llvm.o] Error 1
</code></pre></div></div>
<p><strong>Solution is:</strong>
If you’re missing <code class="language-plaintext highlighter-rouge">DataTypes.h</code>, you probably forgot to run cmake on <code class="language-plaintext highlighter-rouge">llvm</code> and copy the generated header. There is a <code class="language-plaintext highlighter-rouge">DataTypes.h.cmake</code> in that directory, which cmake turns into <code class="language-plaintext highlighter-rouge">DataTypes.h</code>.</p>
<h4 id="the-missing-mach-oprune_trieh-header-file">The missing <code class="language-plaintext highlighter-rouge">mach-o/prune_trie.h</code> header file</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cc -Os -DLTO_SUPPORT -DTRIE_SUPPORT -g -Wall -I. -I./../include -I. -I/usr/local/include -c \
-o ./nmedit.o ./strip.c -DNMEDIT
./strip.c:50:10: fatal error: 'mach-o/prune_trie.h' file not found
#include <mach-o/prune_trie.h>
^
1 error generated.
make[1]: *** [nmedit.o] Error 1
make: *** [all] Error 1
</code></pre></div></div>
<p><strong>Solution is:</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /usr/include/mach-o
sudo wget https://gist.githubusercontent.com/mikalv/205beaae50d688c1be8816a4ac74cca8/raw/253348d8d558c7f627acef0832149bc32d6791d9/prune_trie.h
</code></pre></div></div>
<h4 id="the-missing-libprunetriea-library">The missing <code class="language-plaintext highlighter-rouge">libprunetrie.a</code> library</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>c++ -o ./strip.NEW \
./strip.private.o -L/usr/local/lib -lprunetrie -stdlib=libc++
ld: library not found for -lprunetrie
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [strip.NEW] Error 1
make: *** [all] Error 1
</code></pre></div></div>
<p><strong>Solution is:</strong>
You seem to have forgotten to copy the <code class="language-plaintext highlighter-rouge">libprunetrie.a</code> library to a directory where the cctools makefile looks for it. On my Mac it worked fine after copying the file to <code class="language-plaintext highlighter-rouge">/usr/local/lib/</code>.</p>
<h4 id="the-missing-mach-oarmreloch-header-file">The missing <code class="language-plaintext highlighter-rouge">mach-o/arm/reloc.h</code> header file</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>In file included from /Users/mikalv/EfiWorkspace/ld64-274.2/src/other/PruneTrie.cpp:26:
/Users/mikalv/EfiWorkspace/ld64-274.2/src/abstraction/MachOFileAbstraction.hpp:591:10: error: 'mach-o/arm/reloc.h' file not found with <angled>
include; use "quotes" instead
#include <mach-o/arm/reloc.h>
^~~~~~~~~~~~~~~~~~~~
"mach-o/arm/reloc.h"
1 error generated.
** BUILD FAILED **
</code></pre></div></div>
<p><strong>Solution is:</strong> Seems like it requires a header file found in <code class="language-plaintext highlighter-rouge">cctools-895/include/mach-o/arm</code>. The patch for the <code class="language-plaintext highlighter-rouge">ld64.xcodeproj/project.pbxproj</code> is pasted below.</p>Mikal VillaOh no! a lot of text. Well, luckily half of the post is troubleshooting. EFI development setup is easy :)Building the XNU kernel on Mac OS X Sierra (10.12.X)2017-06-17T21:46:00+02:002017-06-17T21:46:00+02:00https://0xcc.re/2017/06/17/building-the-xnu-kernel-on-mac-osx-sierra-10.12<h2 id="introduction-to-xnu-compiling">Introduction to XNU compiling</h2>
<p>From version to version, I always love to play around with the kernel, yet there has always been a great lack of guides and documentation on how to build Mac OS X’s kernel, XNU. Those of you who have already tried compiling XNU for Mac OS X 10.12 (Sierra) probably noticed that earlier build guides like <a href="http://shantonu.blogspot.no/2015/12/building-xnu-for-os-x-1011-el-capitan.html">ssen’s blog - Building xnu for OS X 10.11 El Capitan</a> don’t work anymore. Still, many thanks to ssen for putting in the time to write that guide.</p>
<p>The problem is that Apple introduced a <a href="https://en.wikipedia.org/wiki/Circular_dependency">circular dependency</a> between the libdispatch library and the kernel headers, so the order of the build process suddenly became really important.</p>
<p>If you tried compiling it yourself, the header <code class="language-plaintext highlighter-rouge">firehose_buffer_private.h</code> might look familiar from the errors you hit on the road to success. This is a header from the libdispatch library, which forces you to build libdispatch first. Copying the header alone won’t help, since you also need a static library (<code class="language-plaintext highlighter-rouge">libfirehose_kernel.a</code>). To make it a little more of a hassle, the libdispatch source won’t even come close to compiling before the XNU kernel headers are in place.</p>
<h2 id="get-the-sources">Get the sources</h2>
<p>You need to download five sources in total, including XNU itself. These are the version numbers for Mac OS X 10.12.4, which you can find at <a href="https://opensource.apple.com/">Apple downloads</a>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl -O https://opensource.apple.com/tarballs/dtrace/dtrace-209.50.12.tar.gz && \
curl -O https://opensource.apple.com/tarballs/AvailabilityVersions/AvailabilityVersions-26.50.4.tar.gz && \
curl -O https://opensource.apple.com/tarballs/libdispatch/libdispatch-703.50.37.tar.gz && \
curl -O https://opensource.apple.com/tarballs/libplatform/libplatform-126.50.8.tar.gz
# To use the XNU package do
curl -O https://opensource.apple.com/tarballs/xnu/xnu-3789.51.2.tar.gz
export XNUPATH=$(pwd)/xnu-3789.51.2
# (Note that using github is experimental, if you break anything, it's not my fault.)
# To use github do
git clone https://github.com/opensource-apple/xnu.git
export XNUPATH=$(pwd)/xnu
</code></pre></div></div>
<p>To extract all the tarballs at once, you can run <code class="language-plaintext highlighter-rouge">for file in *.tar.gz; do tar -zxf $file; done && rm -f *.tar.gz</code>. From here on, the guide assumes you managed to extract them.</p>
<h2 id="building-time">Building time</h2>
<p>First of all, make sure you have Xcode 8; try to have the latest version to increase the chance that everything builds without errors. Next, make sure you have the command line build tools, which you can check by running <code class="language-plaintext highlighter-rouge">clang -v</code>. If the output is something like <code class="language-plaintext highlighter-rouge">zsh: command not found: clang</code>, then you need to install the command line tools. If you followed my instructions and got Xcode 8, they can easily be installed with the command <code class="language-plaintext highlighter-rouge">xcode-select --install</code>.</p>
<p>Next up is to ensure the SDK is correctly set. A good way to check if your SDK path is set correctly is to run <code class="language-plaintext highlighter-rouge">xcrun -sdk macosx -show-sdk-path</code> and check the output.</p>
<p>By the way, if you struggle with this, kernel compiling might be dangerous for your data, since it can break your OS install. At the very least, test it in a virtual machine.</p>
<h3 id="dtrace">dtrace</h3>
<p>Dtrace is a fantastic debugging tool which you should really look up if you haven’t yet; it’s worth your time. I also wondered why FreeBSD, Solaris and Darwin (OSX) ship dtrace in their official sources while Linux doesn’t, and it seems to be because of some stupid licensing issues.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd dtrace-209.50.12
mkdir -p obj sym dst
xcodebuild install -target ctfconvert -target ctfdump -target ctfmerge ARCHS="x86_64" SRCROOT=$PWD OBJROOT=$PWD/obj SYMROOT=$PWD/sym DSTROOT=$PWD/dst
sudo ditto $PWD/dst/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain
</code></pre></div></div>
<h3 id="availabilityversions">AvailabilityVersions</h3>
<p>This is just a perl script that’s required to build the kernel.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd ../AvailabilityVersions-26.50.4/
mkdir -p dst
make install SRCROOT=$PWD DSTROOT=$PWD/dst
sudo ditto $PWD/dst/usr/local `xcrun -sdk macosx -show-sdk-path`/usr/local
</code></pre></div></div>
<h3 id="xnu---header-install">XNU - Header install</h3>
<p>You will need the XNU headers installed to the SDK directory to be able to build the libdispatch library.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd $XNUPATH
mkdir -p BUILD.hdrs/obj BUILD.hdrs/sym BUILD.hdrs/dst
make installhdrs SDKROOT=macosx ARCH_CONFIGS=X86_64 SRCROOT=$PWD OBJROOT=$PWD/BUILD.hdrs/obj SYMROOT=$PWD/BUILD.hdrs/sym DSTROOT=$PWD/BUILD.hdrs/dst
sudo xcodebuild installhdrs -project libsyscall/Libsyscall.xcodeproj -sdk macosx ARCHS='x86_64 i386' SRCROOT=$PWD/libsyscall OBJROOT=$PWD/BUILD.hdrs/obj SYMROOT=$PWD/BUILD.hdrs/sym DSTROOT=$PWD/BUILD.hdrs/dst
sudo ditto BUILD.hdrs/dst `xcrun -sdk macosx -show-sdk-path`
</code></pre></div></div>
<h3 id="libplatform">Libplatform</h3>
<p>This is also required to build libdispatch. It just needs its source to be available, so you don’t need to build anything here.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd ../libplatform-126.50.8
sudo ditto $PWD/include `xcrun -sdk macosx -show-sdk-path`/usr/local/include
</code></pre></div></div>
<h3 id="libdispatch">Libdispatch</h3>
<p>At last you can build <code class="language-plaintext highlighter-rouge">libfirehose_kernel.a</code> from the libdispatch library. You won’t build the whole library, since you’re only after <code class="language-plaintext highlighter-rouge">libfirehose_kernel.a</code> and the headers.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd ../libdispatch-703.50.37
mkdir -p obj sym dst
sudo xcodebuild install -project libdispatch.xcodeproj -target libfirehose_kernel -sdk macosx ARCHS='x86_64 i386' SRCROOT=$PWD OBJROOT=$PWD/obj SYMROOT=$PWD/sym DSTROOT=$PWD/dst
sudo ditto $PWD/dst/usr/local `xcrun -sdk macosx -show-sdk-path`/usr/local
</code></pre></div></div>
<h3 id="xnu">XNU</h3>
<p>Finally we can build the XNU kernel itself.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd $XNUPATH
# You can choose between RELEASE, DEVELOPMENT or DEBUG, or all.
make SDKROOT=macosx ARCH_CONFIGS=X86_64 KERNEL_CONFIGS="RELEASE DEVELOPMENT DEBUG"
</code></pre></div></div>
<p>Depending on which kernel config you chose, your kernel should be found at one of the following locations.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># If you built DEBUG
file BUILD/obj/DEBUG_X86_64/kernel.debug
BUILD/obj/DEBUG_X86_64/kernel.debug: Mach-O 64-bit executable x86_64
# If you built DEVELOPMENT
file BUILD/obj/DEVELOPMENT_X86_64/kernel.development
BUILD/obj/DEVELOPMENT_X86_64/kernel.development: Mach-O 64-bit executable x86_64
# If you built RELEASE
file BUILD/obj/RELEASE_X86_64/kernel
BUILD/obj/RELEASE_X86_64/kernel: Mach-O 64-bit executable x86_64
</code></pre></div></div>
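<p>The pattern above is easy to script if you build all three configs; here is a tiny (hypothetical, just for illustration) helper that maps a config name to its output path:</p>

```shell
# kernel_path CONFIG - print where the XNU build drops the kernel for a
# given KERNEL_CONFIGS value. RELEASE gets no suffix; DEVELOPMENT and
# DEBUG get the lowercased config name as a suffix.
kernel_path() {
  config=$1
  case "$config" in
    RELEASE) suffix="" ;;
    DEVELOPMENT|DEBUG) suffix=".$(echo "$config" | tr '[:upper:]' '[:lower:]')" ;;
    *) echo "unknown config: $config" >&2; return 1 ;;
  esac
  echo "BUILD/obj/${config}_X86_64/kernel${suffix}"
}

kernel_path RELEASE       # BUILD/obj/RELEASE_X86_64/kernel
kernel_path DEVELOPMENT   # BUILD/obj/DEVELOPMENT_X86_64/kernel.development
```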
<p>And there you go, fresh XNU builds. I choose to end the guide here, but if you wonder how to actually swap it with the pre-shipped kernel, you’ll have to do something like this.</p>
<p>And just to say it one more time: if you do this and experience data loss, crashes, your Mac turning into a badger that tries to kill you, or any other unexpected event, it’s not my fault.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo cp BUILD/obj/RELEASE_X86_64/kernel /System/Library/Kernels/
sudo kextcache -invalidate /
</code></pre></div></div>
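<p>If you instead want to boot the development or debug kernel, the bootloader (as far as I know) needs to be told which suffix to load via the <code class="language-plaintext highlighter-rouge">kcsuffix</code> boot-arg. And since I keep warning about data loss, the sketch below only prints the commands so you can review them before running anything as root:</p>

```shell
# Sketch (use at your own risk): boot the DEVELOPMENT kernel by its suffix.
# kcsuffix must match the kernel file name: kernel.development -> "development".
KERNEL=BUILD/obj/DEVELOPMENT_X86_64/kernel.development
SUFFIX=${KERNEL##*.}   # strips everything up to the last dot
echo "sudo cp $KERNEL /System/Library/Kernels/"
echo "sudo kextcache -invalidate /"
echo "sudo nvram boot-args=\"kcsuffix=$SUFFIX\""
```

<p>Remove the <code class="language-plaintext highlighter-rouge">echo</code> wrappers once you’re happy with what the commands will do, preferably inside a virtual machine first.</p>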
<p>Enjoy your custom XNU!</p>Mikal VillaIntroduction to XNU compiling