How to write to an S7 1515 PLC from fish observers?

Hi All,

I’m encountering an issue while writing data into a Siemens S7 1515 PLC. Let me explain: I have an application “A” that loops over some business data and emits, let’s say, 20 Actyx events on 20 fishes (same fish description, but 20 different IDs). I then have an application “B” observing state changes on those 20 fishes. When a specific state change is detected, application “B” writes 50 values into the PLC for the current fish. Because all state changes arrive in parallel and asynchronously, application B tries to write a lot of data at the same time, so the PLC gets overloaded and seems to reject many of the write operations. Not good…

My first try was to use the node-snap7 library and, for every single value to write, do this: connect / write one value / disconnect. The PLC rejected some write operations at random.

My second try was to use the node-snap7 library again, but group some values (20 max) to be written in a single shot, i.e. connect / write 20 values / disconnect. The PLC still rejected some write operations at random.
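For reference, the second attempt looked roughly like this (IP address, rack/slot, DB number and offsets are placeholders); each batch opened and closed its own connection, and several of these ran concurrently:

```js
// Rough sketch of my 2nd attempt: one connection per batch of writes.
// IP address, rack/slot and the DB layout are placeholders.
const snap7 = require('node-snap7');

function writeBatch(dbNumber, start, buffer) {
  const client = new snap7.S7Client();
  client.ConnectTo('192.168.0.10', 0, 1, (connErr) => {
    if (connErr) {
      return console.error('connect failed:', client.ErrorText(connErr));
    }
    // Write the whole batch into the data block in one shot
    client.DBWrite(dbNumber, start, buffer.length, buffer, (writeErr) => {
      if (writeErr) {
        console.error('write failed:', client.ErrorText(writeErr));
      }
      client.Disconnect();
    });
  });
}
```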

So now I’m trying to use the nodeS7 library, as advised by the people from Actyx. With this library I need to rewrite my code to use a single long-lived connection for all the writes, which is fine for me. But how do I handle the asynchronous state changes I will receive on the 20 fishes so that all the data gets written over that single long-lived connection?
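For illustration, this is roughly what I have in mind for the long-lived connection (host, rack/slot and the tag mapping are just placeholders); what I’m missing is how to feed writes into it from the 20 parallel observers:

```js
// Rough sketch of a single long-lived nodeS7 connection.
// Host, rack/slot and the tag -> DB address mapping are placeholders.
const NodeS7 = require('nodes7');

const conn = new NodeS7();
const tags = { orderCount: 'DB10,INT0' }; // example tag definition

conn.initiateConnection({ port: 102, host: '192.168.0.10', rack: 0, slot: 1 }, (err) => {
  if (err) {
    console.error('connection failed:', err);
    return;
  }
  conn.setTranslationCB((tag) => tags[tag]); // resolve tag names to S7 addresses
  conn.addItems(Object.keys(tags));
  // From here on the connection stays open and conn.writeItems(...) can be
  // called whenever a fish observer reports a state change.
});
```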

Thanks a lot,

Seb

Hey there,

without looking at the code, my first idea would be to buffer the data in application B and then take from that buffer and write to the PLC at intervals it can handle.
The buffer could be a simple array within B or something like a queue.

By decoupling the receive and write concerns, you gain the ability to control the timing of the writes.
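A minimal sketch of that idea, assuming the long-lived nodeS7 connection (`conn`) from your post above; the shape of the queue entries, the tag names and the 200 ms interval are made up:

```js
// Observers only push into an in-memory FIFO; a single timer drains it
// and issues at most one PLC write at a time.
// `conn` is assumed to be the long-lived nodeS7 connection; the interval
// and the shape of the queue entries are made-up examples.
const queue = [];
let writing = false;

// Called from the fish observers in application B
function enqueueWrite(tags, values) {
  queue.push({ tags, values });
}

setInterval(() => {
  if (writing || queue.length === 0) return; // never overlap two writes
  const { tags, values } = queue.shift();
  writing = true;
  conn.writeItems(tags, values, (anythingBad) => {
    if (anythingBad) console.error('write failed for', tags);
    writing = false; // ready for the next queued entry
  });
}, 200);
```

The interval (or draining the next entry only when the previous write has completed) gives you a knob to throttle the load on the PLC.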

Does this make any sense in your scenario?

Hi Wolfgang,

It was also my idea to use a FIFO queue (an internal queue? An MQTT broker? To be defined…). Have you already done this kind of implementation to handle such a situation, or is it something you’ve never encountered?

Thanks,

Seb

Not that I’m aware of, sorry. Perhaps @roland has seen something similar in other projects?

I’d be hesitant to pull in an additional piece of infrastructure, as we already have all relevant data persisted within Actyx. However, we need to look into the error cases, most importantly what happens if the application crashes before having processed all writes.

I’d evaluate how large the buffer would get given a certain amount of data and write frequency, and what would happen in the error case. Depending on the outcome, we may find that this is highly unlikely to happen and an acceptable risk. Otherwise, I suggest looking into the code together; I think it’s hard to come up with a solution without additional context.
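Just to make that concrete with made-up numbers: if all 20 fishes change state at once and each triggers 50 writes, the buffer peaks at around 1,000 entries; drained at, say, 10 writes per second, that burst clears in roughly 100 seconds, and with a few dozen bytes per entry the whole buffer stays far below a megabyte of memory.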

What do you think?

I’m fine with looking at the code together, but I’d first like to do more implementation and testing on my side. I’ll contact you on Discord when I’m ready, if that’s OK with you?

Seb


Great, just give me a ping.