Here is a transcript of the webinar “How to Use InfluxDB to Visualize and Monitor MQTT Messages in an IIoT System”, provided for those who prefer to read rather than watch the webinar. Please note that the transcript has been lightly edited for readability.
- Caitlin Croft: Customer Marketing Manager, InfluxData
- Till Seeberger: Software Engineer, HiveMQ
- Anja Helmbrecht-Schaar: Senior MQTT & Architecture Consultant, HiveMQ
Caitlin Croft: 00:00:04.366 Welcome to today’s webinar. My name is Caitlin Croft. I am very excited to be joined by our friends from HiveMQ, and we will be talking about how to use InfluxDB to visualize and monitor MQTT messages in an IIoT system. Once again, please post any questions you may have for our speakers in the chat or the Q&A. I will be monitoring both. And without further ado, I’m going to hand things off to Anja and Till.
Anja Helmbrecht-Schaar: 00:00:38.659 Yeah, thanks, Caitlin. And thank you for inviting us to hold this webinar today. Hello to everyone; I hope you can see my screen and the first slide here. Today we will show you how we use InfluxDB and the InfluxDB dashboards, on the one hand to monitor metrics from HiveMQ, and on the other hand to monitor the actual data from sensors and IIoT devices, again using the InfluxDB tools together with HiveMQ. We have prepared two demos to show you this. And I’m not alone here: my colleague Till Seeberger is with me in this webinar today, and Till will show us the demo for the HiveMQ metrics. Till is a HiveMQ engineer working on improvements to the HiveMQ broker, and besides that he maintains our MQTT Command Line Interface, a nice tool for MQTT messaging.
Anja Helmbrecht-Schaar: 00:02:11.130 My name is Anja Helmbrecht-Schaar. I’m a senior consultant and I have been working for HiveMQ for about four years. I support customers with the application and specific implementation of HiveMQ extensions, and with the integration of HiveMQ into the customer’s system landscape. Whoops, that’s Google Slides, it’s sometimes a little bit fast. Let me very briefly introduce our company. We are located near Munich, where we have our headquarters. We were founded eight or nine years ago, and with our products and our expertise, our main job is to help move data to and from connected devices in an efficient, fast, and reliable manner. Today we have more than 130 customers running HiveMQ in production environments, with very different use cases where MQTT works for them. Our main product is the HiveMQ MQTT broker, an enterprise MQTT broker that also has an open source edition. This broker is built for highly available, fast, and scalable business-critical IoT applications. We support the MQTT protocol 100%, in all three major versions that are relevant in the MQTT world. And if you have devices that can only run different implementations of MQTT, our broker supports that too: it can interoperate the communication between MQTT clients using different versions.
Anja Helmbrecht-Schaar: 00:04:35.984 MQTT is one of the most important topics in this webinar, and that’s why I’m giving you a brief introduction to its history. It’s really not the newest protocol, I would say. It was invented more than 20 years ago by Andy Stanford-Clark from IBM and Arlen Nipper. The reason was that they needed a protocol that really minimized network bandwidth and device resource requirements, and that’s why they invented MQTT. Three years later it became a candidate for standardization at OASIS, and in October 2014, MQTT 3 became an OASIS standard. Four years later, MQTT 5 followed with its initial release.
Anja Helmbrecht-Schaar: 00:05:40.010 So now both of these versions are available, and alongside the MQTT 5 OASIS standard, HiveMQ provides an open source edition that can speak MQTT 5 as well as MQTT 3. Today it can be stated that MQTT is really the de facto standard for machine-to-machine communication in the Internet of Things. One key feature of MQTT, which not everybody may be aware of and which not every protocol offers, is the publish-subscribe pattern. This publish-subscribe pattern allows the decoupling of sender and receiver. MQTT is a binary protocol that is very simple and lightweight, and it supports state with its session concept. Another important point is that MQTT has a dynamic topic concept: a topic exists from the moment an MQTT client subscribes to it, and it does not need to be preconfigured or created in any other way. It’s completely dynamic.
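The dynamic topic concept described above becomes concrete when you look at topic-filter matching. Below is a minimal Python sketch of MQTT's `+` and `#` wildcard matching; it is an illustrative re-implementation of the rules, not code from any MQTT library, and the topic names are made up.

```python
# Illustrative sketch of MQTT topic-filter matching. A subscription such as
# "factory/+/temperature" matches any topic that happens to exist the moment
# someone publishes to it; no topic needs to be preconfigured.

def topic_matches(topic_filter: str, topic: str) -> bool:
    """Return True if an MQTT topic filter (with + and # wildcards) matches a topic."""
    filter_levels = topic_filter.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(filter_levels):
        if level == "#":                       # multi-level wildcard: matches the rest
            return True
        if i >= len(topic_levels):
            return False
        if level != "+" and level != topic_levels[i]:
            return False                       # '+' matches exactly one level
    return len(filter_levels) == len(topic_levels)

print(topic_matches("factory/+/temperature", "factory/line1/temperature"))  # True
print(topic_matches("factory/#", "factory/line1/temperature/celsius"))      # True
print(topic_matches("factory/+/temperature", "factory/line1/humidity"))     # False
```

The point is that the "topics" here are just strings; the broker never needs a topic to be created in advance.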
Anja Helmbrecht-Schaar: 00:07:20.186 Just to recap, this is what the publish-subscribe pattern looks like. In the middle we always have the MQTT broker. On the right side, for example, there are some clients that subscribe to a specific topic. Once they have subscribed, they automatically receive each message that is published to that topic by another MQTT client. So as you can see, the sender and the receiver do not need to know each other; they are completely independent.
Anja Helmbrecht-Schaar: 00:08:03.102 So let’s go back to our HiveMQ MQTT platform. We have a couple of editions and tools, but today we will only look at the open source side, because everything we will show you today is available and doable with our open source and community editions. We have our HiveMQ broker, which is publicly available, and we have an extension framework. With this extension framework it is possible to extend the functionality of the MQTT broker with your own business logic or application functionality, and it is this extensibility that we will work with today in this webinar.
Anja Helmbrecht-Schaar: 00:09:03.671 Besides this, we also need our brand new test tool, called HiveMQ Swarm. HiveMQ Swarm is a tool that allows you to simulate MQTT clients, and you can run this at very high scale: you can simulate millions of MQTT devices with it. The community edition is limited to smaller scenarios, but the full functionality is available, and with its customization for payload and security, you can really simulate your MQTT environment. Coming back to InfluxDB: our test tool also supports InfluxDB, so the metrics from our benchmark tool can likewise be reported to InfluxDB. So, the third thing: we need the HiveMQ core edition, we need the benchmark tool, and we need something from our Marketplace. Among the extensions we offer for monitoring there is, for example, an InfluxDB extension. This is the extension that is also used in our HiveMQ Cloud solution, the hosted solution we provide for those who don’t want to host HiveMQ themselves.
Anja Helmbrecht-Schaar: 00:10:55.156 There we also use the InfluxDB extension, together with InfluxDB 2.0 and an InfluxDB dashboard. For our customers, this is one of the most widely used extensions, and we sometimes create dashboards, or help customers create dashboards, for use with the InfluxDB extension. We will use it for our demo as well. And now I would like to switch over to Till, because Till will show us how you can use this, how you can configure it, and how the dashboard looks. So it’s yours, Till.
Till Seeberger: 00:11:41.546 Okay, can you hand the screen over to me?
Anja Helmbrecht-Schaar: 00:11:44.120 I can. Yeah, I can stop mine. Okay.
Till Seeberger: 00:11:49.088 Okay. So now you should be able to see my screen. First of all, thank you, Anja, for your quick introduction. A warm welcome from my side as well, and thank you for your interest in HiveMQ and InfluxDB. Next up, we will have a quick look at the monitoring options for HiveMQ. The first thing you might come in contact with when trying to monitor your HiveMQ instance is the so-called HiveMQ Control Center. This control center is basically a simple dashboard which displays the most important HiveMQ metrics: for example, connections, inbound and outbound publish rates, and also subscriptions. But you also have a control plane integrated in the control center, so you can, for example, disconnect clients or manage subscriptions in the control center itself. If you need a more customizable experience for your use case, or a more custom dashboard in general, you can use InfluxDB and our InfluxDB extension to push all HiveMQ metrics to an InfluxDB database, and then monitor your HiveMQ instance using a Grafana dashboard on top. But since the release of InfluxDB 2.0, there is this really nice InfluxDB UI, with which you don’t need a Grafana dashboard anymore. So no third-party solution is needed, just HiveMQ and InfluxDB, which is pretty nice.
Till Seeberger: 00:13:37.042 There are also many other solutions you could use for monitoring, for example the REST interface, Prometheus, and Splunk streaming. So what metrics does HiveMQ actually offer? In HiveMQ, we have around 1,000 predefined metrics available. These can be pushed to InfluxDB using the InfluxDB extension, but you can also access them programmatically in an extension via our Extension SDK. This is all based on the Dropwizard Metrics framework, which allows you to dynamically extend the predefined metrics with your own, and to access our predefined metrics programmatically. Dropwizard offers five metric types: gauge, timer, counter, meter, and histogram. A commonly used metric, and one of the most important, I would argue, is the current total number of active MQTT connections. All predefined HiveMQ metrics are prefixed with com.hivemq, followed by the specific metric name: for example, com.hivemq.networking.connections.current, which, as I said, returns the total number of active MQTT connections. You would typically visualize this with a line graph and aggregate it over the last 10 seconds. If you then see a significant, unexpected drop of this value in your graph, it might hint at a problem in your infrastructure, and it points you to a specific time frame for further analysis.
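The drop-detection idea just described can be sketched in a few lines. This is an illustrative Python sketch of the analysis, not anything InfluxDB or HiveMQ does for you; the 30% threshold and the sample values are made-up assumptions.

```python
# Sketch of the check described above: flag a significant, unexpected drop in
# com.hivemq.networking.connections.current between consecutive aggregation
# windows. The 30% threshold is an arbitrary example, not a product default.

def detect_drops(samples: list[int], threshold: float = 0.3) -> list[int]:
    """Return indices where the value fell by more than `threshold` vs. the previous sample."""
    drops = []
    for i in range(1, len(samples)):
        prev, curr = samples[i - 1], samples[i]
        if prev > 0 and (prev - curr) / prev > threshold:
            drops.append(i)
    return drops

# Ten-second aggregates of current MQTT connections (hypothetical values):
connections = [1000, 1005, 998, 1002, 420, 415, 990]
print(detect_drops(connections))  # [4] -> the window where connections collapsed
```

In practice you would run a query like this over the metric in your dashboard or alerting tool and then zoom into the flagged time window.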
Till Seeberger: 00:15:36.838 For a list of all our available metrics, you can go to our user guide and look up the monitoring part. Now let’s see how you would access the metrics programmatically via our Extension SDK. As I already told you, we use the Dropwizard framework, which basically exposes a so-called metric registry, and we fill this metric registry with our own predefined metrics. You can access this metric registry via our Services API, and then you can access a specific metric predefined by us using the getMetrics method. But you can also add your own metrics in your own custom extension, for example by using the timer function to create a timer, depending on what kind of metric you want to add. Then you set your metric using the specific functions for that metric type, for example .time() or .stop(). And if you have the InfluxDB extension configured, this metric will also be sent to InfluxDB.
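The registry-and-timer pattern described above can be sketched as follows. The real HiveMQ Extension SDK is Java and exposes Dropwizard's actual classes; this is a Python analogue of the same idea, and every name in it (Timer, MetricRegistry, the metric name) is illustrative, not the SDK's API.

```python
# Python analogue of the Dropwizard pattern described above: a registry maps
# metric names to metric objects (get-or-create), and a timer records
# durations. Names are illustrative, not the HiveMQ Extension SDK API.

import time

class Timer:
    def __init__(self):
        self.count = 0      # how many timings were recorded
        self.total = 0.0    # total measured seconds
        self._start = None

    def start(self):                 # analogue of Dropwizard's timer.time()
        self._start = time.perf_counter()

    def stop(self):                  # analogue of context.stop()
        self.total += time.perf_counter() - self._start
        self.count += 1

class MetricRegistry:
    def __init__(self):
        self._metrics = {}

    def timer(self, name: str) -> Timer:   # get-or-create, like registry.timer(name)
        return self._metrics.setdefault(name, Timer())

registry = MetricRegistry()
t = registry.timer("com.example.my-extension.handling-time")  # hypothetical custom metric name
t.start()
time.sleep(0.01)                     # ... the work being measured ...
t.stop()
print(t.count)                       # 1
```

In a real extension, the registry already exists and is shared with HiveMQ, which is why a metric you create this way shows up in InfluxDB automatically once the extension is configured.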
Till Seeberger: 00:16:51.550 As Anja already told you, an extension is basically a simple program written in Java which extends the HiveMQ broker’s functionality, and with it you can seamlessly link your own business logic to the events, messages, and content processed by HiveMQ. A common use case would be to intercept CONNECT packets: when an MQTT client connects to the HiveMQ broker, you could intercept this and, in this case, add your own metric for specific details you want to read out of the CONNECT packet. Comprehensive documentation and examples for HiveMQ extensions in general, and for how to use the Extension SDK, can be found in the documentation on our website. As Anja said, the HiveMQ InfluxDB extension is a very commonly used extension: it basically just takes the metrics from Dropwizard and pushes them to your InfluxDB. You can get this extension from our Marketplace using the download button, which gives you a zip file that you just need to unzip, and that is basically the extension itself.
Till Seeberger: 00:18:16.335 You can also download and clone and our GitHub repository and build the whole extension yourself or use it to do specific changes for your use case and for the extension itself. So after this, you might need to configure your InfluxDB extension and therefore you would just copy the unzipped folder into the HiveMQ extension home folder. And after that, you might need to configure your HiveMQ InfluxDB extension by using the properties file called InfluxDB.properties and there you could set your things like the host, the port or maybe your authentication token and which you need to authenticate to InfluxDB. So, yeah, now let’s see. And now let’s put this all together and see a short demo on this. So here you see I will put up a HiveMQ community edition broker, and install an InfluxDB extension to it. Then I will use this broker to periodically push the metrics from HiveMQ to an InfluxDB bucket called HiveMQ. And then I also will show you a small dashboard with the most important HiveMQ community addition metrics for this. And yeah, if everything is set up, I will simulate a small MQTT scenario which will publish with a rate of around 1,000 publishers per second, and will also receive around 1,000 publishes per second from a HiveMQ community edition broker.
Till Seeberger: 00:20:01.298 So now let’s get into the demo. As you can see here in my Finder, I’ve already downloaded everything I need: the HiveMQ Community Edition, the InfluxDB extension, HiveMQ Swarm, a dashboard I preconfigured and will import, and a command to run InfluxDB, which we will look at now. This is just a plain, simple docker command which runs InfluxDB on port 1886 and sets it up with a specific username and password, and also initializes an organization and a bucket. As you can see, we are using InfluxDB 2.0, because we will use it for the dashboards. So we do this part first by running the command. This might take some seconds, but as soon as InfluxDB has started, we should be able to go to localhost:1886 and access our InfluxDB. Now we need to authenticate the InfluxDB extension of our HiveMQ edition to InfluxDB, and for that we need an authentication token. For simplicity, I will just use the admin token for now. So I copy and paste this admin token, and now I can configure the HiveMQ InfluxDB extension.
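The docker invocation described above might look roughly like the following. This is a sketch based on the InfluxDB 2.x image's standard setup variables; the username, password, and names are illustrative assumptions, not the exact command used in the demo.

```shell
# Illustrative sketch of the InfluxDB 2.x docker command described in the demo.
# All concrete values here (credentials, names) are assumptions.
docker run -d --name influxdb \
  -p 1886:8086 \
  -e DOCKER_INFLUXDB_INIT_MODE=setup \
  -e DOCKER_INFLUXDB_INIT_USERNAME=admin \
  -e DOCKER_INFLUXDB_INIT_PASSWORD=changeme123 \
  -e DOCKER_INFLUXDB_INIT_ORG=hivemq \
  -e DOCKER_INFLUXDB_INIT_BUCKET=HiveMQ \
  influxdb:2.0
# The UI should then be reachable at http://localhost:1886
```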
Till Seeberger: 00:21:44.185 As I already told you, for this we go into the influxdb.properties file and insert our authentication token right here at the authentication property. You might need to change a few things for your specific deployment; for example, if you’re not using localhost, you obviously need to set your host here. But that’s it for our configuration for now. Now we open the new HiveMQ Community Edition folder, where you see an extensions folder, and into this extensions folder we simply drop the HiveMQ InfluxDB extension. That’s the whole installation process for an extension in HiveMQ. Now we can run our HiveMQ instance using the run shell script located in the bin folder. This might also take some seconds. And there you see that the InfluxDB Monitoring Extension started successfully, and our HiveMQ also started successfully in around five seconds.
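For reference, the influxdb.properties configuration just described might look something like the sketch below. The exact key names are documented in the extension's README, so treat every key and value here as an assumption mirroring the demo setup rather than a verified template.

```properties
# Illustrative influxdb.properties sketch for the demo setup.
# Exact supported keys are defined by the HiveMQ InfluxDB extension itself;
# check its README before relying on any of these names.
host:localhost
port:1886
# Token copied from the InfluxDB UI (the admin token in the demo):
auth:my-admin-token
# InfluxDB 2.0 target used in the demo:
bucket:HiveMQ
organization:hivemq
reportingInterval:1
```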
Till Seeberger: 00:23:02.323 So that’s this part done. Now we can go back to our InfluxDB and explore the data HiveMQ is already sending to it. Here you see that the HiveMQ bucket has been created and that all the metrics the HiveMQ Community Edition offers are being sent to InfluxDB. For example, as we have seen, I might want to look at the MQTT connections, so I go to this metric, called com.hivemq.networking.connections.current, and look at it over the last five minutes. There you should see that I currently have zero connections on my MQTT broker. Now let’s change this: we will quickly bring up the MQTT CLI, with which we can simulate MQTT clients, and go into its so-called shell mode. In this shell mode, we can use the connect command to simply connect an MQTT client to a broker. So I type connect, and it connects an MQTT client to our localhost broker. And you can see this is quite quickly reflected in our InfluxDB graph. I might also disconnect it, to prove that this metric also goes down. And here you see that this is reflected too.
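The MQTT CLI interaction just shown can be sketched as the following shell session. The commands are from memory of the tool's shell mode, and the client identifier is an illustrative assumption; check the MQTT CLI documentation for the full option list.

```shell
# Sketch of the demo's MQTT CLI session (client id is illustrative):
mqtt sh                          # enter the CLI's shell mode
con -h localhost -i demoClient   # connect one MQTT client to the local broker
dis                              # disconnect it again, so the metric drops back
exit
```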
Till Seeberger: 00:24:41.428 Okay, now let’s close the MQTT CLI and get to the last part of this demo. I have already preconfigured a dashboard, which we will now import. It’s just plain, simple JSON, which we can import by going to Boards and then using the import-dashboard button here, pasting our JSON into this panel. There you see it: the HiveMQ dashboard was successfully imported. Here you now see a simple dashboard for MQTT metrics: MQTT connections, the total number of publishes received by HiveMQ, the total publishes HiveMQ sent, and the subscriptions currently present on HiveMQ, as well as the rates of incoming and outgoing publishes and some networking traffic, meaning the incoming bytes per second, the outgoing bytes per second, and the total bytes read from the network. As I promised, we will now simulate a small scenario using HiveMQ Swarm, and I’ve already preconfigured the scenario itself. As I said, we will publish at a rate of 1,000 publishes per second, and HiveMQ Swarm will also receive 1,000 publishes per second.
Till Seeberger: 00:26:11.599 So to start HiveMQ Swarm, we just execute the binary HiveMQ Swarm in the win folder and it should just take a few seconds to start. Here you see that stage one started and it’s in progress. And this was basically connecting all our clients. And stage two is now publishing, basically. And if I now update to the last five minutes and set the refresh into five seconds, you should see that the scenario is in progress. So you see that we have 500 clients connected to our broker and their publishes is incoming, more and more publishes incoming, and also publishes outgoing. So half of our MQTT connections are subscriptions, so we have 250 subscriptions and which receive outgoing publishers from HiveMQ.
Till Seeberger: 00:27:04.002 So also in these two metrics, and these two panels, you see that we have our current incoming rate of the promised 1,000 publishes per second, and also an outgoing rate of 1,000 publishes per second. And here you see the current traffic which is generated on HiveMQ. So you see that 30 kilobytes per second are currently written to HiveMQ. And HiveMQ is writing 15 kilobytes per second currently. Okay, so that’s it for the demo. I hope you learned something new from this. And Anja, feel free to take over anytime.
Anja Helmbrecht-Schaar: 00:27:47.693 Okay, thanks, Till. So this is my screen now, hopefully. Our next topic is how to visualize IIoT data in a more generic way; that’s the point here. When you look at classic, status-quo IIoT systems, you often face some of these challenges. Maybe you still have a client-server architecture with many integration points and a number of devices, and with this you have devices and endpoints that use different topics, payloads, and data structures. The data may be agnostic, but the payload must also be [inaudible], and the context for it is not always available. On top of that, the applications assume specific formats and structures that have to be available.
Anja Helmbrecht-Schaar: 00:29:15.243 So this is often the case today, and it makes it really hard to implement a unified approach to visualizing your data. Because this is the case, a group was formed to think about how it could be done better: to use MQTT and, on top of MQTT, define something that makes it easier to work with and access all the data from the devices in an IIoT infrastructure. And this is not only about metrics; it’s about all the data and what you want to do with that data. For this, the Sparkplug group was formed, I think last year, and the project is hosted under the Eclipse organization. The idea is that they use MQTT and, on top of MQTT, build a simple and open specification with exactly these goals: real interoperability between IIoT devices and applications, and something that is easy to maintain and easy to handle. To reach this goal, they defined three things, three major parts of the specification.
Anja Helmbrecht-Schaar: 00:31:05.121 The first thing is that they define a unique topic namespace so that all the participants in this infrastructure knows the namespaces and they have a kind of ontology around, and that is well known for all the participants. The second thing is that they use a unique — or the Sparkplug defines a data model and structure, so that all the components know these — they know how to interpret the payload and they know how to build the payload and how that — and what kind of payload are available. So because there is simply a schema defined and with a data model that fulfills the conditions to share all the data between the devices. And the third part of this Sparkplugs specification is that it shows a mechanism how the MQTT estates can be handled and can be managed. And so that in the infrastructure, every device, every participant that is interested in the state of a device can get the information about these devices. Is it online or not, and so on.
Anja Helmbrecht-Schaar: 00:32:31.013 And this specification or these concepts will be built on top of MQTT with the concept of — So it’s based on MQTT3. Maybe there are some things, some MQTT5 features that would be really brilliant to fit, but yeah. To have more interoperability, the MQTT3 is now the base. And they use the concept of the last will and testament. And so the last will and testament, this is a concept that you send during the connect, a kind of predefined message that will be sent out if the device goes offline. And they also use the mechanism of retained messages. And so a retained message is a message that stays on the topic. It’s only one message that stays on the topic. And for each participant that is subscribing to this topic, he will at first get this retained message. And with this, you can really manage an online/offline state. So this is the classical way in MQTT to manage the status information for devices. And then Sparkplug, so it’s not looking at security and some of these things because this is something that can all be handled with MQTT and with the latest TCP IP security technologies. So this is everything is provided by MQT itself and this specification is this open, and it’s standardized so that you have no vendor lock-in.
Anja Helmbrecht-Schaar: 00:34:28.910 So when we have these things all together, and we look to our infrastructure or our architecture, then the architecture looks completely different — not completely different. We have the same participants and also these HiveMQ or the MQTT broker at least, which is the central component. And we have here now our old participants, now a little bit reordered, so we have here the SCADA IIoT host that is the SCADA system represented in this architecture. And this is now interacting directly with the MQTT broker as a specific MQTT client. And you have here other MQTT — or other applications that have at least an MQTT client inside that it is able to read and communicate with Sparkplug specification. And here on the other side, you have Etch nodes. These are nodes that are Sparkplug enabled, and also are responsible for devices and sensors that maybe have not a Sparkplug implementation available and communicate with other protocols. And these Etch nodes are working then as gateways to forwards and backwards information for the devices.
Anja Helmbrecht-Schaar: 00:36:03.161 So when we look a little bit deeper into the Sparkplug, because we will use in our second demo a Sparkplug, and that’s why I have to introduce this a little bit more. I have here, this is our Sparkplug topic structure because it’s the predefined topic structure. And this is a snippet from a Sparkplug payload in JSON interpretation. So the payload is protobuf because it has a very small footprint, but it has a schema that can be very, very dated. And the typical piece of the payload is this metric piece here, where you have a name and a timestamp and at least a value. And when you look into the kind of data flow from an Etch node to the broker, so these things has been done. You have a connect message that has this death certificate inside as last will and testament, and then you publish the online state with the birth certificate, and the Etch node has to subscribe to some command, message types that are represented by this topic structure. And yeah, with these certificates, it is possible, if a connection lost is there, that all the other participants get this information, if this is necessary or if they are interested in. And if the Etch node is available again, then the data from the devices itself that are behind the Etch node and the own data can be published. And yeah, as I said, we are using for the messages, this retained feature from MQTT, and here is other subscriptions for this example.
Anja Helmbrecht-Schaar: 00:38:19.231 So what we also need is a kind of Sparkplug extension. So we have started to implement Sparkplug extension, and this extension will be also available in the near future for public access. And so the first step is that our Sparkplug extension is implementing the starting and the stopping methods to have access to our HiveMQ metrics service. And the second thing is that — or the main part is that we created a kind of inbound listener so that each incoming message MQTT published message can be listened to. And depending on the payload and on the topic structure, the specific message can be, yeah, put into a metric, into our HiveMQ metrics object and with this then the metric can be visualized. It was our InfluxDB extension and with an InfluxDB dashboard.
Anja Helmbrecht-Schaar: 00:39:32.032 This is what I would like to show you now. The setup is nearly the same, but we also have the Sparkplug extension here, and our scenario generates payloads that simulate Sparkplug payloads, setting things up so that a kind of Sparkplug scenario is built. This is a brief description of what the scenario does. We have two edge nodes here that connect and subscribe to these topics, the command topics of their [inaudible] and the command topics of the devices. And here are the devices. The devices are not MQTT clients themselves; we are simulating that the edge-node MQTT clients send the data on their behalf. On the other side, we have this kind of SCADA host simulated, which publishes its state and subscribes to the whole group of this setup. And our edge nodes publish data to the specific topics. So let me show you how this looks in detail.
Anja Helmbrecht-Schaar: 00:41:14.262 Yeah, this is our extension, and as I said, it’s, I think, some weeks, maybe one or two months, and this extension is also available on our marketplace. And we have here, so the main part is here that we have the extension start and to stop. So it’s really easy. Maybe you would like to try it by your own, and build your own extension. So that’s really not so hard. And this is inbound publisher. Yeah, inbound published interceptor that is reading all the published packets and getting the payload and the topic structure, and depending on some information about the topic structure. So we are validating this. And if the payload is present, then we put the payload into our Sparkplug object, or we pass the payload into a Sparkplug object. And then we can get access to metrics from our Sparkplug interface.
Anja Helmbrecht-Schaar: 00:42:26.418 And so the Sparkplug schema is here. You see here there is the metric objects, for example, with name, alias, timestamp and the data type. So these are the things that we access to check what kind of data this is. And then we accessing the value and put this into one specific, yeah, metrics holder. So this is what we put then here. And then if it’s an integer, we put the integer into long and double and so on. So that’s all what happens. So now I have running my Sparkplug extension, where my HiveMQ with the Sparkplug extension here in the background. And I’m starting my scenario, so a short look here, this is how you can describe a scenario. Maybe it’s not too much too much time to explain. But we really use the setup of the Sparkplug specification, how the Etch nodes in their environment published data. And then, yeah, then I’m running this scenario. And when this starting, you can see on our dashboard, so it’s the same setup that we have our InfluxDB available and now you see we have our two Etch nodes connected. So they are online. They have sent these birth certificates and for each of them, so five devices per Etch node. All the devices have sent their own online status.
Anja Helmbrecht-Schaar: 00:44:21.578 And as you can see, I have simulated some devices that send temperature data and some devices that send some level data. And I have also a device that is sending some power data. And so this is really random data, because I have no real devices in the background. But yeah, as you can see, this is an easy way to have a generic approach on getting the real data from the devices here. On our HiveMQ control center, you could look into the — you have topic structure and so on, yeah. Yeah, so this is, I think, more or less all, and now I’m — Yeah, we are open for questions.
Caitlin Croft: 00:45:17.601 Awesome, thank you. So a lot of people had questions for you around Sparkplug. I think you covered this, but does HiveMQ support the Sparkplug B version of MQTT?
Anja Helmbrecht-Schaar: 00:45:36.403 Yeah. As I said, MQTT is by nature totally payload-agnostic, but to really work with Sparkplug you need a kind of Sparkplug extension or something like it. From the MQTT side, HiveMQ supports everything, because we support MQTT 100%. But for the approach I demonstrated just now, you need a Sparkplug extension. You can build this on your own with all the material from the Sparkplug group, but we will also have such an extension available in the next few weeks on our Marketplace, publicly available.
Caitlin Croft: 00:46:26.614 Awesome.
Anja Helmbrecht-Schaar: 00:46:28.154 I hope this is answering the question.
Caitlin Croft: 00:46:32.263 What is the maximum rate for publishing points per second using HiveMQ?
Anja Helmbrecht-Schaar: 00:46:39.152 This is a really complex question, because it depends totally on the number of clients, on the quality of service levels, and, yeah, on the size of your HiveMQ cluster. So we have infrastructures running where we have really millions of devices connected, and we also have deployments where latency is very important. But I don't know what number you'd like to hear. So handling 20,000 messages per second is something that a quite standard cluster setup can do without any problem. So this is what we often see, I would say. Oh, Till, do you want to add something?
Till Seeberger: 00:47:43.519 Yeah. Just wanted to add that there’s no specific limit in HiveMQ for the publishes per second, actually. So it really depends on your machine and what your specific use case can actually handle.
Caitlin Croft: 00:47:57.995 Does the protobuf payload size impact the number of points?
Anja Helmbrecht-Schaar: 00:48:07.931 So, we also have customers that — so protobuf, I think, is not a really heavy payload, because protobuf is compact binary in the end. And so it's not that much. And we also have customers that are sending really much, much bigger messages around. This is not something we have any doubts about. Absolutely not.
Caitlin Croft: 00:48:48.828 Right. The code that you used for the demo — will you be able to share it with us? I know a few people were asking for that.
Anja Helmbrecht-Schaar: 00:48:58.243 Yeah. The question is which code. So HiveMQ Swarm is something that is now publicly available, and the scenario is something that I could send over. And the extension is, as I said, also available in a few weeks on our marketplace, also for free. So that's not a problem.
Caitlin Croft: 00:49:35.942 Perfect. Yes, someone was asking if this Sparkplug extension is an open source project, so I’m assuming it is and will be made available soon.
Anja Helmbrecht-Schaar: 00:49:45.852 Mm-hmm.
Caitlin Croft: 00:49:46.303 Okay. Right. Let’s see. Someone was also asking about the dashboard that you showed. Is that available on the marketplace?
Till Seeberger: 00:49:59.391 So, yeah, both of the dashboards which we have shown in the demos are currently not available anymore, as far as I know. Yeah, but we might be able to release a dashboard sometime in the future, I think. Right, Anja?
Anja Helmbrecht-Schaar: 00:50:20.694 Yeah, sure. So normally, when customers use HiveMQ and InfluxDB, they often have very different needs for what goes on the dashboard and what doesn't. And that's why providing dashboards is not on our list, but we do this very often for customers, and some kind of standard dashboards could also be made available. Yes.
Caitlin Croft: 00:50:53.273 Right. What happens to the data if the network is disconnected for a few minutes? Is the data stored locally by HiveMQ in this case?
Anja Helmbrecht-Schaar: 00:51:08.652 Yeah. Do you want to answer, Till? I don't want to [crosstalk] steal the show from you.
Till Seeberger: 00:51:14.542 Yeah, yeah, it depends. So there is an MQTT feature called Persistent Session, which you might use. With that, the data is stored at the broker, and your client can receive it when it reconnects later. So, yeah, this would be a feature you might want to check out regarding this question.
Anja Helmbrecht-Schaar: 00:51:34.058 Yeah, and this also belongs to the concept of Sparkplug, which defines what MQTT features should be used. And this is also one point: there are features that not every broker, or so-called broker, is really able to provide. For example, retained messages is something that not every broker is able to provide. And that's why I explained the 100% MQTT feature support in the beginning, because you need all the features for a full Sparkplug implementation.
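[Editor's note] The Persistent Session behavior Till and Anja describe can be illustrated with a toy, in-memory model: while a client with a persistent session is offline, the broker queues messages for it and delivers them on reconnect. This is a conceptual sketch of the MQTT semantics only, not HiveMQ's implementation; all names here are invented for illustration.

```python
# Toy model of MQTT persistent sessions: messages published while a
# persistent-session client is offline are queued by the broker and
# handed over when the client reconnects.

class ToyBroker:
    def __init__(self):
        self.sessions = {}  # client_id -> {"online": bool, "queue": [messages]}

    def connect(self, client_id, clean_start=False):
        """Connect a client; return messages queued while it was offline."""
        session = self.sessions.get(client_id)
        if session is None or clean_start:
            # New session, or client asked to discard the old one
            self.sessions[client_id] = {"online": True, "queue": []}
            return []
        session["online"] = True
        queued, session["queue"] = session["queue"], []
        return queued

    def disconnect(self, client_id):
        """Client goes offline; its session and queue stay on the broker."""
        self.sessions[client_id]["online"] = False

    def publish(self, client_id, message):
        """Deliver immediately if the client is online, otherwise queue."""
        session = self.sessions[client_id]
        if session["online"]:
            return [message]
        session["queue"].append(message)
        return []
```

If a sensor disconnects for a few minutes, every message published to it in the meantime is returned by the broker on the next `connect`, which mirrors the behavior the question was about.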
Caitlin Croft: 00:52:19.863 Right. So there were a few questions. I know you kind of covered a little bit, but do you mind just covering briefly again the connection between HiveMQ, Sparkplug, and then getting the Sparkplug data into InfluxDB?
Anja Helmbrecht-Schaar: 00:52:38.402 So by the way, we are also working together with the Sparkplug group on this. So there's a really deep coupling of knowledge, and we are really interested in building this Sparkplug extension and improving it so that it is fully available. And yeah, InfluxDB is something that, in the IIoT environment, seems to be the established choice for most use cases.
Caitlin Croft: 00:53:26.812 Okay. Wow, that was a lot of questions around Sparkplug. So I appreciate you guys answering all of them. Thank you, everyone, for joining. We’ll stay on here just for another minute or two more if you guys have any more last minute questions. I just want to remind everyone once again that we have InfluxDays coming up on May 18th and 19th. The conference itself is completely free. And we also have a hands-on Flux training on May 10th and 11th. There is a fee attached to the Flux training. This is just so we can ensure a really great student-to-teacher ratio. And then on May 17th, we have our free Telegraf training coming back. So we offered the Telegraf training for the first time last fall, and it was very successful. And so we’re offering it again, and we’ve actually increased the number of available seats.
Caitlin Croft: 00:54:30.454 So be sure to check out the InfluxDays website and register for it. We’re super excited to see you all there. It’s a lot of fun. It’s always interesting having the large-scale virtual events, and I’ve got to say, our InfluxDB community is amazing. Last year, yes, obviously we would have rather seen everyone in person, but we definitely had a lot of fun during the conference and over Slack, so glad to see everyone there. Thank you, everyone, for joining. Once again, this session has been recorded, and the recording and the slides will be made available later today.
Anja Helmbrecht-Schaar: 00:55:11.798 Okay.
Caitlin Croft: 00:55:15.427 Thank you.
Anja Helmbrecht-Schaar: 00:55:15.716 Thanks. Bye.
Till Seeberger: 00:55:17.633 Thanks.