<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Plexx Digital]]></title><description><![CDATA[MARKUS SCHNEIDER]]></description><link>https://www.plexx.digital/</link><image><url>https://www.plexx.digital/favicon.png</url><title>Plexx Digital</title><link>https://www.plexx.digital/</link></image><generator>Ghost 3.5</generator><lastBuildDate>Mon, 24 Mar 2025 14:26:01 GMT</lastBuildDate><atom:link href="https://www.plexx.digital/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Parking Data + EV Data]]></title><description><![CDATA[<p>In a recent <a href="https://www.plexx.digital/napcore-mobility-data-days/">post</a>, I told you about the NAPCORE Mobility Data Days. I was there both as a supporter of the European Parking Association (<a href="https://www.europeanparking.eu">EPA</a>) and with my APDS hat on (APDS: <a href="https://www.allianceforparkingdatastandards.org">Alliance for Parking Data Standards</a>). At the conference, EPA signed a cooperation agreement with the</p>]]></description><link>https://www.plexx.digital/parking-data-ev-data/</link><guid isPermaLink="false">6559db090dafdb0001645ff8</guid><category><![CDATA[APDS]]></category><category><![CDATA[OCPI]]></category><category><![CDATA[EV]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Sun, 19 Nov 2023 10:12:34 GMT</pubDate><media:content url="https://www.plexx.digital/content/images/2023/11/pexels-kindel-media-9799996.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.plexx.digital/content/images/2023/11/pexels-kindel-media-9799996.jpg" alt="Parking Data + EV Data"><p>In a recent <a href="https://www.plexx.digital/napcore-mobility-data-days/">post</a>, I told you about the NAPCORE Mobility Data Days.
I was there both as a supporter of the European Parking Association (<a href="https://www.europeanparking.eu">EPA</a>) and with my APDS hat on (APDS: <a href="https://www.allianceforparkingdatastandards.org">Alliance for Parking Data Standards</a>). At the conference, EPA signed a cooperation agreement with the <a href="https://evroaming.org">EV Roaming Foundation</a>.</p><p>Moving forward, we will work out ways to create smart and deep links between the two standards (APDS and OCPI), because parking and charging often happen at the same time. I will keep you posted.</p>]]></content:encoded></item><item><title><![CDATA[NAPCORE Mobility Data Days]]></title><description><![CDATA[<p>Last week, I attended the NAPCORE Mobility Data Days in Budapest. Three days packed with interesting presentations, workshop sessions, and networking amongst mobility data enthusiasts. For those who don't know yet: <a href="https://napcore.eu">NAPCORE</a> is a cooperation to coordinate and harmonise more than 30 mobility data platforms across Europe. Sharing mobility-related data</p>]]></description><link>https://www.plexx.digital/napcore-mobility-data-days/</link><guid isPermaLink="false">6559d8a60dafdb0001645fd2</guid><category><![CDATA[mobility]]></category><category><![CDATA[open data]]></category><category><![CDATA[eu]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Sun, 19 Nov 2023 09:49:21 GMT</pubDate><media:content url="https://www.plexx.digital/content/images/2023/11/MDD.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.plexx.digital/content/images/2023/11/MDD.png" alt="NAPCORE Mobility Data Days"><p>Last week, I attended the NAPCORE Mobility Data Days in Budapest. Three days packed with interesting presentations, workshop sessions, and networking amongst mobility data enthusiasts.
For those who don't know yet: <a href="https://napcore.eu">NAPCORE</a> is a cooperation to coordinate and harmonise more than 30 mobility data platforms across Europe. Sharing mobility-related data creates new opportunities...and it is increasingly becoming a legal requirement in EU member states. So, if you work in the mobility sector, it is time to learn more. The linked website is a good starting point.</p>]]></content:encoded></item><item><title><![CDATA[Mobilithek]]></title><description><![CDATA[<p>For mobility enthusiasts amongst you who are living and/or working in Germany, there is something you should definitely check out: the <strong>Mobilithek</strong>, an open mobility data exchange platform hosted and maintained by the German government (Federal Ministry for Digital and Transport, BMDV). You can find it here: <a href="https://www.mobilithek.info">https://www.</a></p>]]></description><link>https://www.plexx.digital/mobilithek/</link><guid isPermaLink="false">63b4517d0dafdb0001645f97</guid><category><![CDATA[open data]]></category><category><![CDATA[mobility]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Tue, 03 Jan 2023 16:08:44 GMT</pubDate><media:content url="https://www.plexx.digital/content/images/2023/01/Mobilithek.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.plexx.digital/content/images/2023/01/Mobilithek.png" alt="Mobilithek"><p>For mobility enthusiasts amongst you who are living and/or working in Germany, there is something you should definitely check out: the <strong>Mobilithek</strong>, an open mobility data exchange platform hosted and maintained by the German government (Federal Ministry for Digital and Transport, BMDV). You can find it here: <a href="https://www.mobilithek.info">https://www.mobilithek.info</a>.
It is a B2B platform where data providers and data consumers can find matching connections.</p><p>While it is at an early stage, it is certainly worth a visit, and you will see more and more data being provided there down the road.</p>]]></content:encoded></item><item><title><![CDATA[Meet: SteVe]]></title><description><![CDATA[<p>Folks, please meet <strong>SteVe</strong>. SteVe is software for managing charge points for electric vehicles. It was developed at the renowned RWTH Aachen University, and it is free to use (open sourced under the GPL license).</p><p>You can use SteVe to test and evaluate your ideas in the realm of</p>]]></description><link>https://www.plexx.digital/meet-steve/</link><guid isPermaLink="false">6354f03a0dafdb0001645f3b</guid><category><![CDATA[mobility]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Sun, 23 Oct 2022 07:55:09 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1640185750293-cdf0c1032cd4?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDU5fHxNb2JpbGl0eXxlbnwwfHx8fDE2NjY1MTA5Mzc&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1640185750293-cdf0c1032cd4?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDU5fHxNb2JpbGl0eXxlbnwwfHx8fDE2NjY1MTA5Mzc&ixlib=rb-4.0.3&q=80&w=2000" alt="Meet: SteVe"><p>Folks, please meet <strong>SteVe</strong>. SteVe is software for managing charge points for electric vehicles. It was developed at the renowned RWTH Aachen University, and it is free to use (open sourced under the GPL license).</p><p>You can use SteVe to test and evaluate your ideas in the realm of e-mobility.
SteVe comes with support for a variety of related protocols:</p><ul><li>OCPP1.2S and OCPP1.2J</li><li>OCPP1.5S and OCPP1.5J</li><li>OCPP1.6S and OCPP1.6J</li></ul><figure class="kg-card kg-image-card"><img src="https://www.plexx.digital/content/images/2022/10/Screenshot-2022-10-23-at-09.44.17.png" class="kg-image" alt="Meet: SteVe"></figure><p>A group of SteVe enthusiasts maintains a curated list of charge point models that have been tested with SteVe. You can find it here: <a href="https://github.com/RWTH-i5-IDSG/steve/wiki/Charging-Station-Compatibility">https://github.com/RWTH-i5-IDSG/steve/wiki/Charging-Station-Compatibility</a>.</p><p>The SteVe team maintains a GitHub repository (<a href="https://github.com/steve-community/steve">https://github.com/steve-community/steve</a>). There, you can find the source code as well as instructions on how to spin up your own SteVe instance.</p><p>For the German-speaking amongst you: the name "SteVe" is a combination of <strong>Ste</strong> (short for "Steckdose", German for "socket") and <strong>Ve</strong> (short for "Verwaltung", German for "management").</p><p>Have fun with SteVe!</p>]]></content:encoded></item><item><title><![CDATA[On my own behalf...]]></title><description><![CDATA[<p>A while ago, I joined the APDS team (<a href="https://www.allianceforparkingdatastandards.org">Alliance for Parking Data Standards</a>), a group of people from the parking industry driving interoperability in the realm of mobility.
Born as an initiative by major industry associations, it has now become the blueprint for an international standard (ISO TS 5206-1) and</p>]]></description><link>https://www.plexx.digital/on-my-own-behalf/</link><guid isPermaLink="false">6354ec8d0dafdb0001645efb</guid><category><![CDATA[APDS]]></category><category><![CDATA[parking]]></category><category><![CDATA[Interoperability]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Sun, 23 Oct 2022 07:33:54 GMT</pubDate><media:content url="https://www.plexx.digital/content/images/2022/10/Screenshot-2022-10-23-at-09.24.08.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.plexx.digital/content/images/2022/10/Screenshot-2022-10-23-at-09.24.08.png" alt="On my own behalf..."><p>A while ago, I joined the APDS team (<a href="https://www.allianceforparkingdatastandards.org">Alliance for Parking Data Standards</a>), a group of people from the parking industry driving interoperability in the realm of mobility. Born as an initiative by major industry associations, it has now become the blueprint for an international standard (ISO TS 5206-1) and inspired European standards such as DATEX II (published as CEN/TS 16157-6).</p><p>It all started out with a parking data platform project in the UK, and here I am now: head of the APDS change control team. The change control team is your point of contact to suggest adjustments and incremental extensions to the standard. You think something's missing?
Just let us know, and we'll work with you.</p>]]></content:encoded></item><item><title><![CDATA[Open Sourcing your Project]]></title><description><![CDATA[<p>Some random thoughts</p><p><strong>Why?</strong></p><p>When it comes to open sourcing a project, the key question should be: <strong><em>“Why do I want to do this?”</em></strong></p><p>Typical reasons are:</p><ul><li>growing a user community to find additional contributors or at least be notified about bugs (drive further project development, get support)</li><li>recruiting (find</li></ul>]]></description><link>https://www.plexx.digital/open-source/</link><guid isPermaLink="false">6229b7a30dafdb0001645ee6</guid><category><![CDATA[open source]]></category><category><![CDATA[coding]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Thu, 10 Mar 2022 08:35:47 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1618401471353-b98afee0b2eb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGdpdGh1YnxlbnwwfHx8fDE2NDY5MDEyODE&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1618401471353-b98afee0b2eb?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGdpdGh1YnxlbnwwfHx8fDE2NDY5MDEyODE&ixlib=rb-1.2.1&q=80&w=2000" alt="Open Sourcing your Project"><p>Some random thoughts</p><p><strong>Why?</strong></p><p>When it comes to open sourcing a project, the key question should be: <strong><em>“Why do I want to do this?”</em></strong></p><p>Typical reasons are:</p><ul><li>growing a user community to find additional contributors or at least be notified about bugs (drive further project development, get support)</li><li>recruiting (find good developers)</li><li>marketing (offer your services based on a reference project)</li></ul><p>A not-so-good reason is: “I just want to throw it over the fence so people can do with it whatever they
like.”</p><p><strong>How?</strong></p><p>Another question to answer is: <strong><em>“How exactly do you want to handle/maintain this?”</em></strong></p><p>Once you open source a project that is potentially interesting for others, there is a risk of this act generating overheads for you:</p><ul><li>People might have questions: Who will answer them? Will you allow GitHub issues to be opened?</li><li>People might think they found a bug and expect you to do your own research and potentially fix it.</li><li>There might even be people who would like to actively contribute to the project. Will you want to allow GitHub pull requests? Who will regularly review and assess them?</li><li>Users/followers of the project might expect to see it continuously maintained, e.g. in terms of updating external dependencies. Who would be the one to do this?</li></ul>]]></content:encoded></item><item><title><![CDATA[k8s Service Discovery]]></title><description><![CDATA[<p>My kubernetes students are moving towards making their pods interact with each other. Along those lines, they need to find a good approach to <strong>service discovery</strong>, i.e. how to find a healthy instance of a service that another service wants to consume.
A good way to do (server-side) discovery</p>]]></description><link>https://www.plexx.digital/k8s-service-discovery/</link><guid isPermaLink="false">601d0b21030684000171930d</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Fri, 05 Feb 2021 09:24:57 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1572314997669-275cf96124fc?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDQ4fHxmaW5kfGVufDB8fHw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1572314997669-275cf96124fc?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDQ4fHxmaW5kfGVufDB8fHw&ixlib=rb-1.2.1&q=80&w=2000" alt="k8s Service Discovery"><p>My kubernetes students are moving towards making their pods interact with each other. Along those lines, they need to find a good approach to <strong>service discovery</strong>, i.e. how to find a healthy instance of a service that another service wants to consume. A good way to do (server-side) discovery is to make use of the k8s DNS subsystem. All a calling service has to do is use the target service's <strong>FQDN</strong> (fully qualified domain name). The structure of an FQDN in k8s is this:</p><!--kg-card-begin: markdown--><p><code>http://{service name}.{namespace}.svc.{cluster}.local:{service port}</code></p>
<!--kg-card-end: markdown--><p>Example: if the service you want to reach out to </p><ul><li>is named "phonebook"</li><li>is running in the "default" namespace</li><li>in cluster "explorer"</li><li>on port 8080</li></ul><p>...then the fully qualified domain name to use in your http request would be</p><!--kg-card-begin: markdown--><p><code>http://phonebook.default.svc.explorer.local:8080</code></p>
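<p>In scripts, assembling such a URL is plain string concatenation. A quick sketch (the values are the example ones from above, not a real service):</p><pre><code class="language-bash"># build the service FQDN from its parts
service=phonebook
namespace=default
cluster=explorer
port=8080
url="http://${service}.${namespace}.svc.${cluster}.local:${port}"
echo "$url"   # http://phonebook.default.svc.explorer.local:8080
</code></pre>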
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Minikube Tweaks]]></title><description><![CDATA[<p>I am tutoring some people starting with <em>kubernetes</em>. On their local machines, they're using <em>minikube</em>, and it lets them explore a lot of k8s features without the immediate need of a full-fledged cluster. My students do a lot of research themselves, but occasionally they get back to me with questions</p>]]></description><link>https://www.plexx.digital/untitled-2/</link><guid isPermaLink="false">601cf5d0030684000171929e</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Fri, 05 Feb 2021 07:59:04 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1513020783145-a4d0ac835406?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDh8fGN1YmV8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1513020783145-a4d0ac835406?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDh8fGN1YmV8ZW58MHx8fA&ixlib=rb-1.2.1&q=80&w=2000" alt="Minikube Tweaks"><p>I am tutoring some people starting with <em>kubernetes</em>. On their local machines, they're using <em>minikube</em>, and it lets them explore a lot of k8s features without the immediate need of a full-fledged cluster. My students do a lot of research themselves, but occasionally they get back to me with questions they can't seem to find an answer for. In order not to lose those questions, I am writing them down here. I will keep updating this post as new questions come in.</p><h3 id="loadbalancer-services">LoadBalancer Services</h3><p>One team member has started to experiment with exposing services running inside her minikube to the outside world.
She defined an object of <em>kind</em> <strong>Service</strong>, and in the spec part, she configured it to be of <em>type</em> <strong>LoadBalancer</strong>. She was expecting this to eventually assign an external ip. However, the <strong>EXTERNAL-IP</strong> element remained in state <strong>&lt;pending&gt;</strong>:</p><figure class="kg-card kg-image-card"><img src="https://www.plexx.digital/content/images/2021/02/image-1.png" class="kg-image" alt="Minikube Tweaks"></figure><p>Well, in this case <strong>minikube tunnel</strong> is your friend. Open another terminal window, execute the "minikube tunnel" command, and voilà:</p><figure class="kg-card kg-image-card"><img src="https://www.plexx.digital/content/images/2021/02/image-2.png" class="kg-image" alt="Minikube Tweaks"></figure><p>You now have an external ip (it will go away as soon as you close the tunnel window). Please note that this is specific to the use of <em>minikube</em>. When you're using the managed k8s environment of a public cloud provider such as AWS, Google, Microsoft or DigitalOcean, they will automatically handle everything required for a "real" <em>LoadBalancer</em> <em>Service</em> type.</p><h3 id="minikube-dns">Minikube DNS</h3><p><em>Minikube</em> comes with DNS activated by default. So, most people won't have to do anything, and it will work out of the box. A team member approached me because he wasn't that lucky. He is using a machine running <em>Mac OS</em>, and is using <em>VirtualBox</em> for virtualisation. In this combination, you need to patch a setting for your virtual machine.</p><p>Let's assume your VM is named "minikube" (this is the default, check it via <strong>minikube status</strong>). Open a terminal window and run the following commands:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">minikube stop
VBoxManage modifyvm minikube --natdnshostresolver1 on
minikube start
</code></pre>
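<p>If you want to double-check that the setting took effect before starting the VM again, you can query the VM configuration (a quick sketch; "minikube" is the default VM name from above):</p><pre><code class="language-bash"># query the VM settings and look for the NAT DNS host resolver flag
VBoxManage showvminfo minikube --machinereadable | grep natdnshostresolver1
# should print a line like: natdnshostresolver1="on"
</code></pre>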
<!--kg-card-end: markdown--><p>This will do the trick and make DNS available to all your objects. For those who are curious about the details: you can find them <a href="https://forums.virtualbox.org/viewtopic.php?f=7&amp;t=50368">here</a> and <a href="https://superuser.com/questions/641933">here</a>.</p><h3 id="exposing-a-minikube-ingress">Exposing a Minikube Ingress</h3><p>Next up was the attempt to expose the (training) <em>Minikube</em> so it was - temporarily - accessible from other machines on the local network. As this was not within the scope of the training and was only supposed to be temporary in nature, we used the <code>kubectl port-forward</code> command to accomplish this:</p><pre><code class="language-bash">kubectl port-forward \
    --address=0.0.0.0 \
    --namespace=kube-system \
deployment/ingress-nginx-controller 80:80</code></pre><p>You should see a response similar to this one here:</p><pre><code class="language-bash">Forwarding from 0.0.0.0:80 -&gt; 80</code></pre><p>As the response indicates, this will forward all traffic coming in via the <em>Minikube</em> host's network adapter on port 80 to the ingress controller listening on port 80 of our <em>Minikube</em>. Whenever an incoming request is handled, you'll see a corresponding log line:</p><pre><code class="language-bash">Handling connection for 80</code></pre><h3 id="logging">Logging</h3><p>In the pre-k8s era, the team already had been using <em>Elasticsearch</em> and <em>Kibana</em> for centralized logging. The <em>Docker</em> containers (at the time) would then use the <em>fluentd</em> (td-agent) log driver to contact a locally-installed log shipping agent which in turn would send all received messages to the <em>Elasticsearch</em> instance. The team wanted to keep using <em>Elasticsearch/Kibana</em>, so the question was simply how to accomplish this. One of the team members did some digging and came across this article: <a href="https://mherman.org/blog/logging-in-kubernetes-with-elasticsearch-Kibana-fluentd/">https://mherman.org/blog/logging-in-kubernetes-with-elasticsearch-Kibana-fluentd/</a>.
Another (similar) post can be found here: <a href="https://medium.com/trendyol-tech/forwarding-kubernetes-containers-logs-to-elasticsearch-with-fluent-bit-and-showing-logs-with-411587e54e22">https://medium.com/trendyol-tech/forwarding-kubernetes-containers-logs-to-elasticsearch-with-fluent-bit-and-showing-logs-with-411587e54e22</a></p>]]></content:encoded></item><item><title><![CDATA[(Mobile) Application Security]]></title><description><![CDATA[<p>Whenever a team is tasked with the development of a mobile application, one topic often is greatly underrated or simply not dealt with for other reasons like laziness or a deep dislike of related activities: <strong>application security</strong>.</p><p>Of course, somehow everyone knows that appropriate application security is a must-have, and</p>]]></description><link>https://www.plexx.digital/mobile-application-security/</link><guid isPermaLink="false">5fe9a76a8d58510001a1437e</guid><category><![CDATA[mobile]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Mon, 28 Dec 2020 09:48:40 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1553816078-c95b2e4c8ef8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDQwfHxCcm9rZW4lMjBXaW5kb3d8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1553816078-c95b2e4c8ef8?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDQwfHxCcm9rZW4lMjBXaW5kb3d8ZW58MHx8fA&ixlib=rb-1.2.1&q=80&w=2000" alt="(Mobile) Application Security"><p>Whenever a team is tasked with the development of a mobile application, one topic often is greatly underrated or simply not dealt with for other reasons like laziness or a deep dislike of related activities: <strong>application security</strong>.</p><p>Of course, somehow everyone knows that appropriate application security is a must-have, and 
sometimes a team just can't find a good approach to dealing with it. Let me give you some inspiration in this regard:</p><p>Regardless of the application technology chosen (web, hybrid, cross-platform), a mobile device must always be considered insecure. While not prominently visible to the end user, the communication between the (mobile) frontend and the backend needs to be secured against common and application-specific vulnerabilities.</p><h3 id="threats-catalogue">Threats catalogue</h3><p>Before a team can take measures in terms of application security, potential threats first need to be identified. This is typically done by compiling a threats catalogue listing potential threats from different categories such as</p><ul><li>Organizational shortcomings</li><li>Human failure</li><li>Technical failure</li><li>Force majeure (if applicable)</li></ul><p>Keep in mind that security risks can be induced both externally and internally. Also, they can be deliberate or accidental in nature. All those factors need to be considered when putting together the list of potential threats.</p><h3 id="security-mechanisms-features">Security mechanisms/features</h3><p>Once an initial version of the threats catalogue has been created, the team can go over it and see what can be done to address which risk. It helps to prioritize corresponding tasks by factors like “severity of consequences” and “likelihood of occurrence”.</p><h3 id="security-is-not-a-one-time-task">Security is not a one-time task</h3><p>It goes without saying that the topic of security must accompany the entire life cycle of an application.
It is not something you get done once and for all.</p><ul><li>Every iteration should include a task to answer the question “are the planned changes/enhancements likely to induce new security risks?”</li><li>As a team, make sure to define a recurring task “threats catalogue update/revision”.</li></ul><p>Obviously, those are just some examples of what should be taken into consideration. It is always a good idea to talk to other teams and find out how they have organized their work in this regard.</p><p>Stay secure! 🔐</p>]]></content:encoded></item><item><title><![CDATA[Software Product Management]]></title><description><![CDATA[<p>Jez Humble (Twitter: <a href="https://twitter.com/jezhumble">@jezhumble</a>) teaches a class on software product management at UC Berkeley. For 2020 he recorded all his lectures, and you can find everything available for free (licensed CC-BY-SA) <a href="https://docs.google.com/document/d/1DELwxJzR7NLVE8gNXBloGMEfZvJwaaWUn6mIoUDKA2o/edit#">here</a> (it is a Google document with links to YouTube video clips).</p><p>I especially like this one <a href="https://www.youtube.com/watch?v=aLtylScwsG0">here</a></p>]]></description><link>https://www.plexx.digital/software-product-management/</link><guid isPermaLink="false">5fc61f388d58510001a14369</guid><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Tue, 01 Dec 2020 10:51:43 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1507415492521-917f60c93bfe?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDEwfHx8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1507415492521-917f60c93bfe?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDEwfHx8ZW58MHx8fA&ixlib=rb-1.2.1&q=80&w=2000" alt="Software Product Management"><p>Jez Humble (Twitter: <a href="https://twitter.com/jezhumble">@jezhumble</a>) teaches a class on software
product management at UC Berkeley. For 2020 he recorded all his lectures, and you can find everything available for free (licensed CC-BY-SA) <a href="https://docs.google.com/document/d/1DELwxJzR7NLVE8gNXBloGMEfZvJwaaWUn6mIoUDKA2o/edit#">here</a> (it is a Google document with links to YouTube video clips).</p><p>I especially like this one <a href="https://www.youtube.com/watch?v=aLtylScwsG0">here</a>, which deals with OKRs and helps to focus on the right stuff...</p>]]></content:encoded></item><item><title><![CDATA[Docker, iptables and ufw]]></title><description><![CDATA[<p>In case you are using Docker on Linux systems (pretty much the standard), it is important to be aware of the fact that Docker is using <em>iptables</em> rules to provide network isolation. This is something that usually happens automatically under the hood, so many people - like me -</p>]]></description><link>https://www.plexx.digital/docker-and-iptables/</link><guid isPermaLink="false">5f22ce738d58510001a1427e</guid><category><![CDATA[docker]]></category><category><![CDATA[hosting]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Thu, 30 Jul 2020 14:23:05 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1574790859109-865174a1ceda?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1574790859109-865174a1ceda?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Docker, iptables and ufw"><p>In case you are using Docker on Linux systems (pretty much the standard), it is important to be aware of the fact that Docker is using <em>iptables</em> rules to provide network isolation.
This is something that usually happens automatically under the hood, so many people - like me - don't dive into the details of what's going on there. The other day however, I came across the fact that I probably should have given some more attention to how this actually works.</p><p>If you're using one of the major cloud providers (AWS, Azure, Google, IBM, ...), they will provide you with a "real" firewall one way or the other, and that's a good thing. If however you are self-hosting a web-facing machine for test purposes, you might be using <em>ufw</em>, the "uncomplicated firewall". This is an easy-to-use tool that lets you set up policies concerning which protocols from which sources are allowed to reach out to which port on particular network interfaces. When it comes to Docker, you should be aware of one important fact:</p><h1 id="docker-does-not-honor-ufw-rules">Docker does not honor ufw rules</h1><p>Let's say e.g. you have a Docker container on your web-facing test server that is running NGINX, and you used the <code>-p 8080:8080</code> parameter to make this http server available outside of the container listening on port 8080. Now, the web - i.e. everyone - will be able to connect to this NGINX instance via <code>http://yourdomain:8080/</code>. Let's assume for a minute that you decide to make this endpoint unavailable to the public. Easy, eh? You ssh to your machine, and a one-liner will do the job for you:</p><p><code>sudo ufw deny 8080</code></p><p>Mission accomplished. Are you sure? Go ahead, open the browser on your desktop machine, and navigate to <code>http://yourdomain:8080</code>...and you will see that <strong>NGINX is still serving content to the public</strong>. How is this even possible 😵?? You did close port 8080. Well, as mentioned earlier: Docker is manipulating <em>iptables</em> rules, but it doesn't give a sh.. about <em>ufw</em>, at least not out of the box.</p><p>Docker typically sets up two custom <em>iptables</em> chains.
They're named <code>DOCKER</code> and <code>DOCKER-USER</code>. The <em>DOCKER</em> chain is used by Docker itself, and you don't want to mess with it. If you have rules/policies that you want to see applied, you should add them to the <em>DOCKER-USER</em> chain. The default behaviour is that <u>all external source IP addresses are allowed to connect to your Docker host</u>.</p><p>So, if you want to only allow external traffic from a particular IP address, you will have to add a corresponding rule to the <em>DOCKER-USER</em> chain:</p><!--kg-card-begin: markdown--><p><code>iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.11 -j DROP</code></p>
<!--kg-card-end: markdown--><p>(this is assuming that your host's external network interface is named <em>eth0</em>, and the only source IP you want to allow is <em>192.168.1.11</em>).</p><p>One final note: some posts in various blogs and support sites suggest you go and turn off <em>iptables</em> manipulation by Docker (there is a key named <em>iptables</em> in the Docker configuration that defaults to <em>true</em> and can be set to <em>false</em>). This is not a good idea, as it will also turn off a lot of the wanted behaviour.</p><p>Hope this helps.</p>]]></content:encoded></item><item><title><![CDATA[Keycloak Realm Export/Import]]></title><description><![CDATA[<p>I have been using <strong>keycloak</strong> as my identity management solution for a couple of years now, and I have yet to see a different OSS solution that might make me consider a change.</p><p>In integration testing, staging and production systems, I am using a keycloak docker container with a <strong>postgresql</strong></p>]]></description><link>https://www.plexx.digital/keycloak-realm-export-import/</link><guid isPermaLink="false">5eff272204226400017cc162</guid><category><![CDATA[cloud]]></category><category><![CDATA[operations]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Fri, 03 Jul 2020 13:04:57 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1561648348-f1ac236ae254?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1561648348-f1ac236ae254?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Keycloak Realm Export/Import"><p>I have been using <strong>keycloak</strong> as my identity management solution for a couple of years now, and I have yet to see a different OSS solution that might make me consider a change.</p><p>In integration testing, 
staging and production systems, I am using a keycloak docker container with a <strong>postgresql</strong> companion container holding the data. While the keycloak admin console does offer export/import functionality to a certain extent, it is limited: neither users nor secrets (passwords etc.) can be exported. There is, however, a keycloak-provided way of accomplishing this.</p><p>When starting up, keycloak checks for a number of system properties that control migration actions, export and import in particular. The following settings will make keycloak export all realm, client and user settings, including passwords:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">-Dkeycloak.migration.action=export
-Dkeycloak.migration.provider=singleFile
-Dkeycloak.migration.file=/export/kcdump.json
</code></pre>
<!--kg-card-end: markdown--><p>Upon the next start-up, keycloak will write all details out to a file named <em>/export/kcdump.json</em> (in this example). Alternatively, you can have keycloak write a separate file per realm plus another file for the users of each realm. If that's what you want (my favorite), you will have to go with the <em>dir</em> provider instead of the <em>singleFile</em> option, and configure a directory name rather than a file name. This is what it looks like:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">-Dkeycloak.migration.action=export
-Dkeycloak.migration.provider=dir
-Dkeycloak.migration.dir=/export
</code></pre>
<!--kg-card-end: markdown--><p>That way, you will end up with files <em>master-realm.json</em>, <em>master-users-0.json</em>, <em>someother-realm.json</em>, <em>someother-users-0.json</em> (if e.g. you have two realms named <em>master</em> and <em>someother</em>).</p><p>The process of importing the above json files into a fresh keycloak database will most likely not surprise you:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">-Dkeycloak.migration.action=import
-Dkeycloak.migration.provider=singleFile
-Dkeycloak.migration.file=/export/kcdump.json
-Dkeycloak.migration.strategy=OVERWRITE_EXISTING
</code></pre>
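Putting this together for the dockerized setup described above: one way to trigger the import is to start a fresh keycloak container with the migration properties appended as server arguments (the WildFly-based keycloak images of that era pass extra arguments on to <code>standalone.sh</code>). This is only a sketch; the network, database settings, volume path and image tag are placeholders, not taken from my actual setup:

```shell
# Sketch: start a keycloak container that imports the dump during start-up.
# All names (network, DB host/credentials, image tag) are illustrative.
docker run -d --name keycloak \
  --network keycloak-net \
  -e DB_VENDOR=postgres \
  -e DB_ADDR=keycloak-db \
  -e DB_USER=keycloak \
  -e DB_PASSWORD=secret \
  -v "$PWD/export:/export:ro" \
  jboss/keycloak:6.0.1 \
  -b 0.0.0.0 \
  -Dkeycloak.migration.action=import \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.file=/export/kcdump.json \
  -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
```

(<code>-b 0.0.0.0</code> is repeated here because overriding the image's default arguments would otherwise drop it.)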
<!--kg-card-end: markdown--><p>I think this is an elegant way to "clone" a setup. Furthermore, json files are good candidates for customization with an automated deployment tool such as Ansible. That's what I usually do: I create jinja2 templates from the json realm definitions, customize them by injecting the variable values I need, and then spin up a keycloak container that imports the customized files into the database. The official keycloak documentation for this feature can be found here: <a href="https://www.keycloak.org/docs/6.0/server_admin/#_export_import">https://www.keycloak.org/docs/6.0/server_admin/#_export_import</a>.</p>]]></content:encoded></item><item><title><![CDATA[Kafka: getting rid of a Topic the hard Way]]></title><description><![CDATA[<p>I am a big fan of Kafka, and I am using it a lot to connect my domains and services with scalable and performant asynchronous communication. Over the time, I sometimes run into a situation where I need to get rid of a topic that has become obsolete. In some</p>]]></description><link>https://www.plexx.digital/kafka-getting-rid-of-a-topic/</link><guid isPermaLink="false">5ee2609f04226400017cc12c</guid><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Thu, 11 Jun 2020 17:00:10 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1502602903514-eca7c59f29dc?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1502602903514-eca7c59f29dc?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Kafka: getting rid of a Topic the hard Way"><p>I am a big fan of Kafka, and I am using it a lot to connect my domains and services with scalable and performant asynchronous communication. 
Over time, I sometimes run into a situation where I need to get rid of a topic that has become obsolete. In some (rare) cases, topics are "sticky": even though I requested deletion, they somehow remain.</p><p>This is where <strong>zookeeper-shell</strong> comes into play. Let's assume for this example that</p><ul><li>I have a zookeeper instance running on a server named <strong>zookeeper-1</strong>, listening on port <strong>2181</strong>,</li><li>I have a topic named <strong>NOTNEEDEDANYMORE</strong> that I want to get rid of, and</li><li>the normal way of deleting the topic (e.g. <code>kafka-topics --zookeeper zookeeper-1:2181 --delete --topic NOTNEEDEDANYMORE</code>) didn't work, for whatever reason.</li></ul><p>In this situation, the following two commands will do the job:</p><pre><code class="language-bash">$ zookeeper-shell zookeeper-1:2181
# now I am connected to the shell
[] rmr /brokers/topics/NOTNEEDEDANYMORE
[] rmr /admin/delete_topics/NOTNEEDEDANYMORE
# by now, the topic is gone
# quit the shell
[] quit
$
</code></pre>]]></content:encoded></item><item><title><![CDATA[Cassandra Quickstart]]></title><description><![CDATA[<p>I don't know why exactly, but when it comes to high-performance no-SQL databases with clustering capabilities, many people seem to exclude Apache's <em>Cassandra</em> from their list of candidates. This might be a mistake. Cassandra has been around for quite a while now, so it is safe to say that it</p>]]></description><link>https://www.plexx.digital/cassandra-quickstart/</link><guid isPermaLink="false">5e8b56d16fc52b0001a5bca3</guid><category><![CDATA[cloud]]></category><category><![CDATA[database]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Mon, 06 Apr 2020 16:35:46 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1513758173941-bfbd2e4166f5?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1513758173941-bfbd2e4166f5?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Cassandra Quickstart"><p>I don't know why exactly, but when it comes to high-performance no-SQL databases with clustering capabilities, many people seem to exclude Apache's <em>Cassandra</em> from their list of candidates. This might be a mistake. Cassandra has been around for quite a while now, so it is safe to say that it is a mature product fit for production. One good proof for this is the fact that Apple is running 25,000+ Cassandra nodes.</p><p>So, time to at least take a look. The following script expects to be run in a Linux environment with pre-installed Docker. It sets up a cluster with three nodes (granted, on the same machine, but hey, we're just getting started here).</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># cassandra/run.sh
#
# setup script for a docker based cassandra cluster
#

# settings
CASSANDRA_VERSION=&quot;3.11.6&quot;
NAME_TEMPLATE=&quot;cassandra_node&quot;
NUMBER_OF_NODES=3
MEMORY_PER_CONTAINER=&quot;4g&quot;
CLUSTER_NAME=&quot;plexx&quot;
VOLUME_BASE=&quot;/opt/cassandra/node&quot;
STARTUP_DELAY=60

# network: create own network if it doesn't exist already
CASSANDRA_NETWORK=$(docker network ls --format=&quot;{{ .Name }}&quot; | grep cassandra)
if [ -z &quot;${CASSANDRA_NETWORK}&quot; ]
then
	echo &quot;cassandra network doesn't exist yet, so create it&quot;
	docker network create cassandra
else
	echo &quot;cassandra network already present&quot;
fi

# stop and remove any existing containers (data will NOT be deleted)
for node in $(seq $NUMBER_OF_NODES)
do
	CONTAINER_NAME=&quot;${NAME_TEMPLATE}${node}&quot;
	echo &quot;checking container $CONTAINER_NAME ...&quot;
	NODE_IS_RUNNING=$(docker ps --format=&quot;{{ .Names }}&quot; |grep &quot;$CONTAINER_NAME&quot;)
	if [ -n &quot;$NODE_IS_RUNNING&quot; ]
	then
		echo &quot;container ${CONTAINER_NAME} is running, stop it&quot;
		docker stop &quot;$CONTAINER_NAME&quot;
	else
		echo &quot;container $CONTAINER_NAME is not running&quot;
	fi
	NODE_EXISTS=$(docker ps -a --format=&quot;{{ .Names }}&quot; |grep &quot;$CONTAINER_NAME&quot;)
	if [ -n &quot;$NODE_EXISTS&quot; ]
	then
		echo &quot;remove pre-existing container $CONTAINER_NAME&quot;
		docker rm &quot;$CONTAINER_NAME&quot;
	fi
done

# make sure host volumes exists
for node in $(seq $NUMBER_OF_NODES)
do
	MOUNT_POINT=&quot;${VOLUME_BASE}${node}&quot;
	if [ -d &quot;$MOUNT_POINT&quot; ]
	then
		echo &quot;mount point at $MOUNT_POINT exists&quot;
	else
		echo &quot;mount point at $MOUNT_POINT does not exist yet, so create it&quot;
		sudo mkdir -p &quot;$MOUNT_POINT&quot;
	fi
done

# create and run the MASTER container
echo &quot;create and start master&quot;
docker run -d \
--name &quot;${NAME_TEMPLATE}1&quot; \
--network=cassandra \
--memory $MEMORY_PER_CONTAINER \
-e CASSANDRA_CLUSTER_NAME=&quot;$CLUSTER_NAME&quot; \
-v &quot;${VOLUME_BASE}1:/var/lib/cassandra&quot; \
cassandra:&quot;$CASSANDRA_VERSION&quot;

# give the master some time to start up
echo &quot;waiting $STARTUP_DELAY seconds...&quot;
sleep &quot;$STARTUP_DELAY&quot;

# determine IP address of master
MASTER_IP=&quot;$(docker inspect --format='{{ .NetworkSettings.Networks.cassandra.IPAddress }}' ${NAME_TEMPLATE}1)&quot;
echo &quot;master node IP address is $MASTER_IP (will be used as seed)&quot;

# all the other nodes 
for node in $(seq 2 $NUMBER_OF_NODES)
do
	echo &quot;create and start node #$node&quot;
	docker run -d \
	--name &quot;${NAME_TEMPLATE}${node}&quot; \
	--network=cassandra \
	--memory $MEMORY_PER_CONTAINER \
	-e CASSANDRA_CLUSTER_NAME=&quot;$CLUSTER_NAME&quot; \
	-e CASSANDRA_SEEDS=&quot;$MASTER_IP&quot; \
	-v &quot;${VOLUME_BASE}${node}:/var/lib/cassandra&quot; \
	cassandra:&quot;$CASSANDRA_VERSION&quot;
done

echo &quot;done.&quot;
</code></pre>
<!--kg-card-end: markdown--><p>To verify that everything went well, issue the following command from your terminal command line: <code>docker exec -it cassandra_node1 bash -c 'nodetool status'</code>.  This should generate some output similar to this:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens  Owns (effective)  Host ID       Rack
UN  172.18.0.2  326.58 KiB  256     100.0%            668b272b9fea  rack1
UN  172.18.0.3  319.04 KiB  256     100.0%            1bb5e1c153b7  rack1
</code></pre>
<!--kg-card-end: markdown--><p>A status of "UN" stands for <strong>Up</strong> and <strong>Normal</strong>. This is what you want. Note: Cassandra needs memory. Anything below the 4g I configured in my script will most likely make it refuse to start up. Swap is fine, though.</p><p>Finally, here are the ports Cassandra uses (and that you may need to open):</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>port number</th>
<th>category</th>
<th>description</th>
</tr>
</thead>
<tbody>
<tr>
<td>7000</td>
<td>inter-node ports</td>
<td>inter-node cluster communication</td>
</tr>
<tr>
<td>7001</td>
<td>inter-node ports</td>
<td>SSL inter-node cluster communication</td>
</tr>
<tr>
<td>7199</td>
<td>inter-node ports</td>
<td>JMX monitoring port</td>
</tr>
<tr>
<td>9042</td>
<td>client ports</td>
<td>client connect port</td>
</tr>
<tr>
<td>9160</td>
<td>client ports</td>
<td>client connect port (Thrift)</td>
</tr>
<tr>
<td>9142</td>
<td>client ports</td>
<td>default for <code>native_transport_port_ssl</code></td>
</tr>
</tbody>
</table>
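Before moving to a real client, you can already talk to the cluster from the command line via <code>cqlsh</code>, which ships with the Cassandra image. A quick smoke test (the keyspace name <em>demo</em> is just an example):

```shell
# Open cqlsh inside the first node, create a keyspace replicated across all
# three nodes, and list the keyspaces to confirm it exists.
docker exec -it cassandra_node1 cqlsh -e "
CREATE KEYSPACE IF NOT EXISTS demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
DESCRIBE KEYSPACES;"
```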
<!--kg-card-end: markdown--><p>I will follow up with a second post showing how to connect and talk to your new cluster using <em>Go</em>.</p>]]></content:encoded></item><item><title><![CDATA[Distribute and scale using CockroachDB]]></title><description><![CDATA[<p>A robust database engine is the foundation of many applications, no matter where they are running. There is a vast variety of options you have to choose from. But when it comes to horizontal scaling and resilience, things quickly get complicated. Clustered databases with read and write access typically are</p>]]></description><link>https://www.plexx.digital/distribute-and-scale-using-cockroachdb/</link><guid isPermaLink="false">5e79bfea6fc52b0001a5bc53</guid><category><![CDATA[cloud]]></category><category><![CDATA[database]]></category><category><![CDATA[operations]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Markus Schneider]]></dc:creator><pubDate>Tue, 24 Mar 2020 08:40:46 GMT</pubDate><media:content url="https://www.plexx.digital/content/images/2020/03/IMG_1E472F7CFEBD-1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.plexx.digital/content/images/2020/03/IMG_1E472F7CFEBD-1.jpeg" alt="Distribute and scale using CockroachDB"><p>A robust database engine is the foundation of many applications, no matter where they are running. There is a vast variety of options you have to choose from. But when it comes to horizontal scaling and resilience, things quickly get complicated. Clustered databases with read and write access typically are no low-hanging fruits and often are reserved for the enterprise versions of commercial database products. Well, I have good news for you: there is a company named <a href="https://www.cockroachlabs.com">Cockroach Labs</a> that is offering a database claiming to do exactly this (i.e. scale and distribute). 
And while there are commercial plans to run a managed cluster on AWS and/or Google, the self-hosted variant is free to use, even for commercial purposes. CockroachDB comes with the BSL (Business Source License), a model that is mostly generous unless you are planning to offer the product in a database-as-a-service mode. That use is excluded because the company wants to prevent the big cloud providers (Azure, AWS, Google) from offering CockroachDB as a managed service and taking this type of business away from them. This has happened before with other popular open source database products.</p><p>While a cluster on a single machine obviously doesn't make a lot of sense for a production environment, it is an easy way to try out CockroachDB using Docker. This is what I will show you in this post.</p><h2 id="prerequisites">Prerequisites</h2><p>I won't go over the details of installing docker and docker-compose. There are many good tutorials out there on how to do this. So, I am assuming that you have a test machine with some flavour of Linux and docker-compose installed.</p><h2 id="starting-the-cluster">Starting the Cluster</h2><p>Here's the docker-compose.yml file I used for my little test:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">version: &quot;3&quot;
services:
  cockroach1:
    image: cockroachdb/cockroach
    command: start --insecure
    ports:
      - &quot;8080:8080&quot;
    volumes:
      - ./data/cockroach1:/cockroach/cockroach-data
    networks:
      cockroachnetwork:
        aliases:
          - cockroach1

  cockroach2:
    image: cockroachdb/cockroach
    command: start --insecure --join=cockroach1
    volumes:
      - ./data/cockroach2:/cockroach/cockroach-data
    depends_on:
      - cockroach1
    networks:
      cockroachnetwork:
        aliases:
          - cockroach2

  cockroach3:
    image: cockroachdb/cockroach
    command: start --insecure --join=cockroach1
    volumes:
      - ./data/cockroach3:/cockroach/cockroach-data
    depends_on:
      - cockroach1
    networks:
      cockroachnetwork:
        aliases:
          - cockroach3

networks:
  cockroachnetwork:
    driver: bridge
</code></pre>
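One detail worth noting before you start the cluster: the compose file above only publishes the admin UI port (8080) to the host. If you also want to reach the SQL interface from outside the Docker network, you can additionally publish CockroachDB's SQL port, 26257, on one of the nodes. An illustrative change to the <code>cockroach1</code> service definition:

```yaml
    # publish the admin UI and the SQL port (26257) on the host
    ports:
      - "8080:8080"
      - "26257:26257"
```

With that in place, any PostgreSQL-compatible client on the host can connect to the cluster via port 26257 once it is up.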
<!--kg-card-end: markdown--><p>Fire it up by running <code>docker-compose up</code>, and there you have your clustered database. You can now attach to any of your three nodes and execute SQL statements. The command <code>docker-compose exec cockroach1 ./cockroach sql --insecure</code> will take you to a prompt where you can enter and run standard SQL commands. You can connect to <strong>any</strong> of the nodes, and your modifications will be replicated to all others.</p><h2 id="nice-specialties">Nice Specialties</h2><p>I am relatively new to CockroachDB, too. So, I don't know exactly how they are doing it, but there are some specialties in this product that come in very handy.</p><h3 id="no-proprietary-drivers">No proprietary Drivers</h3><p>CockroachDB does not come with its own set of drivers. Instead, its makers decided to establish compatibility with PostgreSQL. So, in your projects, you can just pretend to be using a Postgres database while in reality using CockroachDB.</p><h3 id="no-master">No Master</h3><p>There is no real master. All nodes are equally important...or replaceable. Try stopping one of the nodes (e.g. via <code>docker-compose stop cockroach1</code>), make some changes and restart it (<code>docker-compose start cockroach1</code>). You will find all modifications replicated in the resurrected node, too.</p><p>I haven't done extensive error scenario testing yet, but I certainly will, and I will keep you posted.</p>]]></content:encoded></item></channel></rss>