
Quickfix taking huge memory when network latency is high


Quickfix taking huge memory when network latency is high

Vipin Chaudhary
QuickFIX/J Documentation: http://www.quickfixj.org/documentation/
QuickFIX/J Support: http://www.quickfixj.org/support/



Hi Team,

I am facing a memory issue with QuickFIX/J.
When we send too many messages to clients (e.g. when we send backlog data), our Acceptor application uses a very large amount of memory and GC is unable to reclaim it.

On further analysis I found that when we send a message to a session (session.send(message)), the message is handed off to a MINA NioSession. Apache MINA itself maintains a queue of pending write requests (WriteRequestQueue), so if we produce messages faster than the network can drain them, this queue grows.

This leads to high memory usage, which in turn leads to an OutOfMemoryError.

I am thinking of blocking the send call when the queue size exceeds a threshold. For this I need to modify IoSessionResponder.java so that I can access the MINA queue size, which means rebuilding QuickFIX/J.

Is there a better way to handle this scenario?

QuickFIX/J provides MaxScheduledWriteRequests, but when that threshold is exceeded QuickFIX/J disconnects the counterparty. I don't want a disconnect; I want the send call to block until the write queue has drained below a given threshold.
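One way to sketch the blocking behaviour I have in mind (all names here are hypothetical; a real fix would read the write-queue size inside IoSessionResponder) is a semaphore-based wrapper around the send call:

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: bound the number of queued outbound messages with a
// Semaphore so send() blocks instead of letting the queue grow without limit.
// doSend() is a stand-in for the actual session.send(message) call.
public class ThrottledSender {
    private final int maxQueued;
    private final Semaphore permits;
    private int sent;  // demo counter standing in for the transport

    public ThrottledSender(int maxQueued) {
        this.maxQueued = maxQueued;
        this.permits = new Semaphore(maxQueued);
    }

    /** Blocks until the outbound queue has room, then hands the message off. */
    public void send(String message) throws InterruptedException {
        permits.acquire();   // blocks when maxQueued messages are in flight
        doSend(message);
    }

    /** Invoke from the transport's write-completion callback. */
    public void onMessageWritten() {
        permits.release();
    }

    /** Messages handed off but not yet confirmed written. */
    public int queuedMessages() {
        return maxQueued - permits.availablePermits();
    }

    private void doSend(String message) {
        sent++;  // stand-in for the real network write
    }
}
```

Here onMessageWritten() would have to be wired to a write-completion notification (MINA's IoHandler has a messageSent callback), so permits are returned as the socket drains.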

Thanks
Vipin

------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
_______________________________________________
Quickfixj-users mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/quickfixj-users

Re: Quickfix taking huge memory when network latency is high

Christoph John


Hi,

IMHO there is no way around it: either increase your heap memory (how much heap do you have configured now?) or change the behaviour in the code. You could also create a pull request with your solution at https://github.com/quickfix-j/quickfixj/pulls to have your modifications merged back into the code base.

Cheers,
Chris.


On 02/05/17 12:23, Vipin Chaudhary wrote:


--
Christoph John
Development & Support
Direct: +49 241 557080-28
Mailto:[hidden email]

http://www.macd.com
----------------------------------------------------------------------------------------------------
MACD GmbH
Oppenhoffallee 103
D-52066 Aachen
Tel: +49 241 557080-0 | Fax: +49 241 557080-10
Amtsgericht Aachen: HRB 8151
Ust.-Id: DE 813021663

Geschäftsführer: George Macdonald
----------------------------------------------------------------------------------------------------

take care of the environment - print only if necessary


Re: Quickfix taking huge memory when network latency is high

Guido Medina



Another solution for things like streaming prices is to set tick times. Some clients have good connections and others bad, so we accumulate prices and send them every X millis, and only if they have changed, checking all of that in memory; assuming each instrument has an integer ID, that is a very fast check on a hash map.

We also split instruments across ticks, so that only instruments whose ID matches the tick (via modulo) are sent at a specific millisecond. You then need to make sure your kernel scheduler is up to it; that is to say, if you are using Linux, use a 4.x kernel, and make sure your GC pauses are small, or don't use a stop-the-world GC.

Example tick timers for 5 ticks:

- Tick 1 will send instrument IDs 1, 6, 11, 16, ...where (ID - 1) % 5 = 0 <- this timer will run at millis 0, 5, 10, etc
- Tick 2 will send instrument IDs 2, 7, 12, 17, ...where (ID - 1) % 5 = 1 <- this timer will run at millis 1, 6, 11, etc
- Tick 3 will send instrument IDs 3, 8, 13, 18, ...where (ID - 1) % 5 = 2 <- this timer will run at millis 2, 7, 12, etc
- Tick 4 will send instrument IDs 4, 9, 14, 19, ...where (ID - 1) % 5 = 3 <- this timer will run at millis 3, 8, 13, etc
- Tick 5 will send instrument IDs 5, 10, 15, 20, ...where (ID - 1) % 5 = 4 <- this timer will run at millis 4, 9, 14, etc
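The modulo assignment above can be sketched as follows (the tick count of 5 and the ID range are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Round-robin tick assignment: an instrument with integer ID belongs to
// tick ((ID - 1) % ticks) + 1, so tick 1 covers IDs 1, 6, 11, 16, ...
public class TickAssigner {
    static int tickFor(int instrumentId, int ticks) {
        return ((instrumentId - 1) % ticks) + 1;
    }

    // Groups IDs 1..maxId into ticks, reproducing the table above.
    static Map<Integer, List<Integer>> schedule(int maxId, int ticks) {
        Map<Integer, List<Integer>> byTick = new TreeMap<>();
        for (int id = 1; id <= maxId; id++) {
            byTick.computeIfAbsent(tickFor(id, ticks), t -> new ArrayList<>()).add(id);
        }
        return byTick;
    }
}
```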

Another way would be to use a non-blocking bounded intermediary queue (see the JCTools implementations): when the intermediary queue is full, you drop messages silently. Maybe such an implementation could be added as a plugin to QFJ; I will take a look and see what can be done. Either way, go to JIRA and create a ticket so that we can follow up with ideas.
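A minimal sketch of such a drop-on-full buffer, using java.util.concurrent.ArrayBlockingQueue as a stdlib stand-in for a JCTools queue such as MpscArrayQueue:

```java
import java.util.concurrent.ArrayBlockingQueue;

// Bounded intermediary buffer that silently drops new messages when full,
// so a slow consumer can never make the producer's memory grow unbounded.
public class DroppingBuffer<T> {
    private final ArrayBlockingQueue<T> queue;
    private long dropped;

    public DroppingBuffer(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    /** Returns false (and drops the message) when the buffer is full. */
    public boolean publish(T message) {
        boolean accepted = queue.offer(message);  // non-blocking insert
        if (!accepted) {
            dropped++;  // count drops so the loss is at least observable
        }
        return accepted;
    }

    /** Consumer side: next message, or null if empty. */
    public T poll() {
        return queue.poll();
    }

    public long droppedCount() {
        return dropped;
    }
}
```

For market data this is usually acceptable because a newer price supersedes a dropped one; for order flow it obviously is not.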

Every client wants you to send every update all the time; unfortunately most clients can't actually handle that imagined requirement. Accumulating and sending is usually the way to go for things like this; by doing so you do both parties a favor and keep the system healthy.

HTH,

Guido.

On Wed, May 3, 2017 at 8:05 AM, Christoph John <[hidden email]> wrote:



Re: Quickfix taking huge memory when network latency is high

Guido Medina



If you are using an Oracle JVM 8, say JDK 8u131, your best GC will be G1GC. Example GC parameters that we use:
-Xms1g -Xmx1g -server -XX:+UseG1GC -XX:+PerfDisableSharedMem -XX:+ParallelRefProcEnabled

Read the following article about PerfDisableSharedMem: http://www.evanjones.ca/jvm-mmap-pause.html

HTH,

Guido.

On Wed, May 3, 2017 at 9:15 AM, Guido Medina <[hidden email]> wrote:




Re: Quickfix taking huge memory when network latency is high

Guido Medina



Forgot to mention that for the best results with G1GC, make your heap size static, that is to say Xms = Xmx = whatever value you need. In my example I used 1g, but I'm sure you will be using more than that.
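Putting the two posts together, a launch command might look like this (the 4g heap and the jar name are placeholders):

```shell
# Illustrative launch: static heap (Xms = Xmx) with G1GC and the flags above.
java -Xms4g -Xmx4g -server \
     -XX:+UseG1GC \
     -XX:+PerfDisableSharedMem \
     -XX:+ParallelRefProcEnabled \
     -jar my-fix-acceptor.jar   # hypothetical application jar
```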

On Wed, May 3, 2017 at 9:19 AM, Guido Medina <[hidden email]> wrote:





Re: Quickfix taking huge memory when network latency is high

Robert Engels-2
In reply to this post by Christoph John


The blocking solution would be fine as long as you have very few clients that would hit this condition and you use a pool of sending threads; otherwise you will need to implement a more sophisticated event-driven system.
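A minimal sketch of such a sending-thread pool (pool size and task shape are illustrative): blocking sends are submitted to a fixed pool, so a send that stalls on one slow client occupies only a worker thread, not the caller.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Pool of sending threads: callers hand off (possibly blocking) send tasks
// instead of executing them inline, so one slow client cannot stall the
// producer thread. The Runnable stands in for a blocking session send.
public class PooledSender {
    private final ExecutorService pool;

    public PooledSender(int threads) {
        this.pool = Executors.newFixedThreadPool(threads);
    }

    /** Hand off a send without blocking the caller. */
    public Future<?> submit(Runnable sendTask) {
        return pool.submit(sendTask);
    }

    /** Drain remaining tasks and stop the workers. */
    public void shutdownAndWait() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that a shared fixed pool does not by itself preserve per-session message order, which matters for FIX; in practice you would pin each session to one worker (or one queue per session) to keep ordering.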

> On May 3, 2017, at 2:05 AM, Christoph John <[hidden email]> wrote:


Re: Quickfix taking huge memory when network latency is high

Robert Engels-2
In reply to this post by Guido Medina



FIX (over TCP) market data is not suitable for low latency; you need to use multicast. Sending market data updates at millisecond intervals is way too slow, and even your tick solution would quickly fall apart for large option complexes.

On May 3, 2017, at 3:15 AM, Guido Medina <[hidden email]> wrote:



Re: Quickfix taking huge memory when network latency is high

Guido Medina
In reply to this post by Robert Engels-2



Well, you have the MINA workers doing their thing; I just don't overwhelm them by sending everything.
In the most recent system I built, each initiator and acceptor connection is an Akka actor
distributed among a few physical nodes (sharded initiators and a load-balanced acceptor), and these
actors share a thread pool.

HTH,

Guido.

On Wed, May 3, 2017 at 12:53 PM, Robert Engels <[hidden email]> wrote:


The blocking solution would be fine as long as you have very few clients that would hit this condition and you use a pool of sending threads; otherwise you will need to implement a more sophisticated event-driven system.

> On May 3, 2017, at 2:05 AM, Christoph John <[hidden email]> wrote:
>
>
>
> Hi,
>
> IMHO there is no way other than to either increase your heap memory (how much heap do you have
> configured now?) or to change the behaviour in the code. You could also create a pull request with
> your solution on https://github.com/quickfix-j/quickfixj/pulls to have your modifications merged
> back into the code base.
>
> Cheers,
> Chris.

Re: Quickfix taking huge memory when network latency is high

Guido Medina
In reply to this post by Robert Engels-2



Most so-called low-latency solutions fall apart anyway once you add the network I/O, at least with
FIX over TCP. Most trading connectors are via FIX TCP, so it doesn't matter that you keep prices
updated at a nanosecond level: every theory falls apart once the data has to reach the network.

For our internal cluster we use Aeron: https://github.com/real-logic/Aeron

Our target is 5 ms at the top and we are meeting it. Most sellers of so-called low-latency
solutions fall apart when you run them with a stop-the-world GC anyway; it's all labels and
marketing.

Guido.

On Wed, May 3, 2017 at 12:57 PM, Robert Engels <[hidden email]> wrote:



FIX (TCP) market data is not suitable for low latency; you need to use multicast. Sending market data updates at millisecond intervals is way too slow. Even your tick solution would quickly fall apart for large option complexes.

On May 3, 2017, at 3:15 AM, Guido Medina <[hidden email]> wrote:

Another solution for things like streaming prices is to set tick times. Some clients have good
connections and others bad, so we accumulate prices and send them every X millis, and we send only
the ones that have changed, checking all of that in memory; assuming each instrument has an integer
ID, that is a very fast check on a hash map.
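The accumulate-and-send-on-change approach can be sketched as a conflation cache: keep only the latest price per instrument ID, and on each tick emit just the entries that changed since the last flush. The class and method names here are illustrative, not from any real system:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of price conflation: latest price per instrument ID, flushed on a
// tick timer, emitting only entries that changed (illustrative names).
public class PriceConflator {
    private final Map<Integer, Double> latest = new HashMap<>();
    private final Map<Integer, Double> lastSent = new HashMap<>();

    // Fast in-memory update; overwrites any unsent intermediate price.
    public void onPrice(int instrumentId, double price) {
        latest.put(instrumentId, price);
    }

    // Called by the tick timer: return only instruments whose price changed
    // since the previous flush, and remember what was sent.
    public Map<Integer, Double> flushChanged() {
        Map<Integer, Double> changed = new HashMap<>();
        for (Map.Entry<Integer, Double> e : latest.entrySet()) {
            Double prev = lastSent.get(e.getKey());
            if (prev == null || !prev.equals(e.getValue())) {
                changed.put(e.getKey(), e.getValue());
                lastSent.put(e.getKey(), e.getValue());
            }
        }
        return changed;
    }
}
```

A burst of updates for the same instrument collapses to a single outbound message per tick, which is what keeps the write queue small.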

We also split instruments across ticks, where only instruments whose ID mod (%) the tick count
matches are sent at one specific milli. You need to make sure your kernel scheduler is up to it
(that is, if you are using Linux, use a 4.x kernel), and make sure your GC pauses are small, or
don't use a stop-the-world GC.

Example tick timers for 5 ticks:

- Tick 1 will send instrument IDs 1, 6, 11, 16, ...where (ID - 1) % 5 = 0 <- this timer will run at millis 0, 5, 10, etc
- Tick 2 will send instrument IDs 2, 7, 12, 17, ...where (ID - 1) % 5 = 1 <- this timer will run at millis 1, 6, 11, etc
- Tick 3 will send instrument IDs 3, 8, 13, 18, ...where (ID - 1) % 5 = 2 <- this timer will run at millis 2, 7, 12, etc
- Tick 4 will send instrument IDs 4, 9, 14, 19, ...where (ID - 1) % 5 = 3 <- this timer will run at millis 3, 8, 13, etc
- Tick 5 will send instrument IDs 5, 10, 15, 20, ...where (ID - 1) % 5 = 4 <- this timer will run at millis 4, 9, 14, etc
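The five-bucket split above reduces to simple modular arithmetic; a minimal sketch (the class name is assumed):

```java
// Sketch of the tick partitioning above: with T tick timers, instrument ID n
// belongs to bucket (n - 1) % T, so each timer handles an even 1/T slice
// of the instruments.
public class TickPartition {
    public static int bucket(int instrumentId, int ticks) {
        return (instrumentId - 1) % ticks;
    }
}
```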

Another way would be to use a non-blocking bounded intermediary queue (see the JCTools
implementations); that way, when your intermediary queue is full, you drop messages silently. Maybe
such an implementation can be added as a plugin to QFJ; I will take a look and see what can be
done. Either way, go to JIRA and create a ticket so that we can follow up with ideas.
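The drop-when-full behaviour can be sketched with the JDK's `ArrayBlockingQueue`, whose `offer()` returns false instead of blocking when the queue is at capacity (JCTools queues would be the higher-throughput choice; the wrapper name here is an assumption):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Sketch of a bounded intermediary queue that silently drops messages when
// full, rather than blocking the producer or disconnecting (illustrative).
public class DroppingQueue<T> {
    private final ArrayBlockingQueue<T> queue;
    private long dropped;

    public DroppingQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // offer() returns false when at capacity; we count the drop and move on.
    public boolean publish(T message) {
        boolean accepted = queue.offer(message);
        if (!accepted) {
            dropped++;
        }
        return accepted;
    }

    public T poll() { return queue.poll(); }

    public long droppedCount() { return dropped; }
}
```

Dropping is only acceptable for conflatable data like price updates, where a newer value supersedes the lost one; it is not suitable for order flow.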

Every client wants you to send every update all the time; unfortunately, most clients can't even
handle such an "in their imagination" requirement. Accumulating and sending is usually the way to
go for things like this; by doing so you will be doing both parties a favor and have a healthy
system.

HTH,

Guido.


Re: Quickfix taking huge memory when network latency is high

Robert Engels-2



I think you are making a mistake in your sizing. A single TCP FIX connection might submit a single
order that generates market data that needs to go to thousands of users; thus the need for
multicast.

TCP FIX market data is fine for systems with very few connected clients, or for slow ones where
latency is not an issue.

Also, there are technologies like Zing, or non-Java systems, that don't have a stop-the-world GC.
Our system performs most work in the sub-50-microsecond range, with market data in the
sub-15-microsecond range and maximum delays under 500 microseconds.


Re: Quickfix taking huge memory when network latency is high

Guido Medina
In reply to this post by Robert Engels-2



Sending every single update (at least via TCP) is not viable, and the tick solution doesn't just
fall apart: each tick processes a set of instruments, not just one, and the data to be sent is
calculated in real time and put in a hash map.

Consider the amount of garbage you would generate by calculating in real time, say, 3 price
sources that each send a market depth of 5 to a target: if you redo that calculation on every
update and then send, you waste a lot of CPU, and that will never scale.

Instead, each source updates its cache, and per tick we pick the top N best prices and send them.
I don't see how that would fall apart in a cluster with sharded initiators and in-memory caches
per node.

Guido.


Re: Quickfix taking huge memory when network latency is high

Vipin Chaudhary
In reply to this post by Guido Medina



Under normal load we don't have many messages, but one message type (MassQuote) is very big (1000-5000 prices). For now I have opted for the synchronous socket write option, and so far it is working fine.
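That workaround corresponds to the QuickFIX/J socket write settings; a sketch of the relevant session-settings fragment (verify the setting names and values against the configuration documentation for your QuickFIX/J version):

```ini
# Sketch: make send() write synchronously to the socket instead of queueing
# unbounded write requests in MINA. Names per QuickFIX/J settings docs;
# confirm for your version.
[DEFAULT]
SocketSynchronousWrites=Y
# Milliseconds to wait for a blocked synchronous write (example value)
SocketSynchronousWriteTimeout=30000
```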

On Wed, May 3, 2017 at 6:06 PM, Guido Medina <[hidden email]> wrote:
QuickFIX/J Documentation: <a href="http://www.quickfixj.org/documentation/ QuickFIX/J" rel="noreferrer" target="_blank">http://www.quickfixj.org/documentation/
QuickFIX/J Support: http://www.quickfixj.org/support/



Most so called low latency solutions fall apart anyway when you add the network I/O at least in fix TCP,
most trading connectors are via FIX TCP so it doesn't matter you keep prices updated at a nano second level,

All theories will fall apart when it has to reach the network.
As for our internal cluster, we use Aeron https://github.com/real-logic/Aeron

Our target is 5ms top and we are making it, most sellers of so called I have the support low latency solution will fall apart when you run it with a stop the world GC anyway,
is all labels and marketing.

Guido.

On Wed, May 3, 2017 at 12:57 PM, Robert Engels <[hidden email]> wrote:

FIX (tcp) market data is not suitable for low latency. You need to use multicast. Sending market data updates on millisecond intervals is way to slow. Even your tick solution would quickly fall apart for large option complexes. 

On May 3, 2017, at 3:15 AM, Guido Medina <[hidden email]> wrote:

Another solution for things like streaming prices is to set tick times, some clients have good connections and others bad, we do accumulate prices and send them every X millis,
and we send only if they have changed, checking all that in memory; assuming each instruments has an integer ID; is very fast check on a hash map.

Also we have a split of instruments / ticks where only instrument ID mod (%) ticks are sent at one specific milli, now you need to make sure your kernel scheduler is at it,
that is to say that if you are using Linux you a 4.x kernel and make sure your GC pauses are small or don't use a "stop the world gc"

Example tick timers for 5 ticks:

- Tick 1 will send instrument IDs 1, 6, 11, 16, ...where (ID - 1) % 5 = 0 <- this timer will run at millis 0, 5, 10, etc
- Tick 2 will send instrument IDs 2, 7, 12, 17, ...where (ID - 1) % 5 = 1 <- this timer will run at millis 1, 6, 11, etc
- Tick 3 will send instrument IDs 3, 8, 13, 18, ...where (ID - 1) % 5 = 2 <- this timer will run at millis 2, 7, 12, etc
- Tick 4 will send instrument IDs 4, 9, 14, 19, ...where (ID - 1) % 5 = 3 <- this timer will run at millis 3, 8, 13, etc
- Tick 5 will send instrument IDs 5, 10, 15, 20, ...where (ID - 1) % 5 = 4 <- this timer will run at millis 4, 9, 14, etc
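The ID-to-tick bucketing above can be sketched as follows (class and method names are hypothetical; assumes 1-based instrument IDs as in the example):

```java
// Spread instruments across N tick timers so each timer only flushes
// its own bucket: instrument ID k belongs to bucket (k - 1) % ticks.
// Each bucket would then be driven by its own scheduled timer, offset
// by one millisecond from the previous one.
public final class TickSharding {
    private TickSharding() {}

    // Bucket index (0-based) for a 1-based instrument ID.
    public static int bucketOf(int instrumentId, int ticks) {
        return (instrumentId - 1) % ticks;
    }

    public static void main(String[] args) {
        int ticks = 5;
        for (int id = 1; id <= 10; id++) {
            System.out.println("instrument " + id + " -> tick " + (bucketOf(id, ticks) + 1));
        }
    }
}
```

Instruments 1, 6, 11, ... land in bucket 0 (Tick 1), instruments 2, 7, 12, ... in bucket 1 (Tick 2), and so on, matching the table above.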

Another way would be to use a non-blocking bounded intermediary queue (see the JCTools implementations): when the intermediary queue is full you drop messages silently. Maybe such an implementation can be added as a plugin to QFJ; I will take a look and see what can be done. Either way, please go to JIRA and create a ticket so that we can follow up with ideas.
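A minimal sketch of that drop-on-full pattern, using the JDK's ArrayBlockingQueue for illustration (a JCTools queue such as MpscArrayQueue would be the lock-free equivalent); the class and method names here are illustrative, not QFJ API:

```java
import java.util.concurrent.ArrayBlockingQueue;

// Bounded intermediary queue: offer() is non-blocking and returns
// false when the queue is full, so excess updates are dropped
// instead of accumulating in memory.
public final class DroppingSendQueue {
    private final ArrayBlockingQueue<String> queue;

    public DroppingSendQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Returns true if enqueued, false if the update was dropped.
    public boolean tryEnqueue(String fixMessage) {
        return queue.offer(fixMessage);
    }

    // Called by the writer thread; null when nothing is pending.
    public String poll() {
        return queue.poll();
    }
}
```

Dropping is acceptable for conflatable data like price updates, where a newer update supersedes the dropped one; it is not acceptable for order flow.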

Every client wants you to send every update all the time; unfortunately most clients can't even handle that "in their imagination" requirement. Accumulating and sending is usually the way to go for things like this: by doing it you do both parties a favor and keep the system healthy.

HTH,

Guido.

On Wed, May 3, 2017 at 8:05 AM, Christoph John <[hidden email]> wrote:


Hi,

IMHO there is no other way than to either increase your heap memory (how much heap do you have
configured now?) or to change the behaviour in the code. You could also create a pull request with
your solution on https://github.com/quickfix-j/quickfixj/pulls to have your modifications merged
back into the code base.

Cheers,
Chris.


On 02/05/17 12:23, Vipin Chaudhary wrote:
> Hi Team,
>
> I am facing a memory issue with QuickFIX/J.
> When we send too many messages (when we send backlog data) to clients, our Acceptor application
> takes very high memory and GC is unable to clear it.
>
> On further analysis I found that when we send a message to a session (session.send(message)), the
> message is handed to a NioSession (MINA library). Apache MINA itself maintains a WriteMessageQueue
> of messages, so if we produce messages too fast this queue grows.
>
> This leads to high memory use, which leads to out of memory.
>
> I am thinking of blocking the send call when the queue size exceeds a threshold; for this I need to
> update IoSessionResponder.java so that I can access the MINA queue size.
> This would require rebuilding QuickFIX/J.
>
> Is there a better way to handle this scenario?
>
> QuickFIX/J provides /MaxScheduledWriteRequests/, but when this threshold is exceeded QuickFIX/J
> disconnects the consumer. I don't want a disconnection; I want the call to block until the
> WriteMessageQueue size drops below a particular threshold.
>
> Thanks
> Vipin
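The blocking-send idea from the original post can be sketched generically. The wrapper below is hypothetical (not QFJ API); it polls a queue-depth supplier before handing the message off. With Apache MINA, the depth supplier could read IoSession.getScheduledWriteMessages():

```java
import java.util.function.Consumer;
import java.util.function.IntSupplier;

// Hypothetical back-pressure wrapper: block the caller while the
// outbound write queue is above a threshold, instead of disconnecting.
public final class BlockingSender {
    private final IntSupplier queueDepth;   // current pending writes, e.g. MINA's scheduled write messages
    private final Consumer<String> sender;  // actual send, e.g. responder/session send
    private final int maxPending;

    public BlockingSender(IntSupplier queueDepth, Consumer<String> sender, int maxPending) {
        this.queueDepth = queueDepth;
        this.sender = sender;
        this.maxPending = maxPending;
    }

    // Returns true if sent, false if interrupted while waiting.
    public boolean send(String message) {
        // Crude poll-and-sleep back-pressure; production code would
        // prefer a condition variable signalled when writes complete.
        while (queueDepth.getAsInt() >= maxPending) {
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        sender.accept(message);
        return true;
    }
}
```

Note that blocking the caller this way stalls the producing thread; if that thread also drives session-level processing (e.g. heartbeats), a timeout or a dedicated sender thread would be safer.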

--
Christoph John
Development & Support
Direct: +49 241 557080-28
Mailto:[hidden email]



http://www.macd.com
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------
MACD GmbH
Oppenhoffallee 103
D-52066 Aachen
Tel: +49 241 557080-0 | Fax: +49 241 557080-10
         Amtsgericht Aachen: HRB 8151
Ust.-Id: DE 813021663

Geschäftsführer: George Macdonald
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------

take care of the environment - print only if necessary


Re: Quickfix taking huge memory when network latency is high

Guido Medina



Hmm, why so many prices in a single message? I would think most providers prefer one instrument per subscription, which helps a lot with scaling.

On Wed, May 3, 2017 at 1:50 PM, Vipin Chaudhary <[hidden email]> wrote:



In Normal Load condition we don't have many messages. But One messages (MassQuote) is very big (1000-5000) prices. Currently I have opted for SynchronousSocketWrite option. Till now It is working fine.

Re: Quickfix taking huge memory when network latency is high

Robert Engels-2



With options you need to subscribe by entire product or month; the overhead of sending a single option per message would be too great.

Also, with volatility-quoted options a single underlying price change might generate price changes on thousands of instruments.

On May 3, 2017, at 7:53 AM, Guido Medina <[hidden email]> wrote:

Hmmm, why so many prices on a single message? I would think most providers prefer 1 instrument per subscription? which helps a lot with scaling.


Re: Quickfix taking huge memory when network latency is high

Guido Medina
In reply to this post by Robert Engels-2



I know about Zing and am doing some tests with it at the moment, but they are still fixing a Scala 2.12 issue as it does not work well with it yet; besides, not everybody can afford it. And even if you multicast, the "magic" of iterating through each connector and sending the message happens somewhere: the fact that you don't do it programmatically doesn't mean "for each connection send this message" goes away; it is probably just optimized into multiple threads/workers taking groups of connections.

You know very well that you don't need many threads to make the thousands go fast, but good algorithms and good asynchronous design. Our application (different needs, different market) has thread pools of no more than 8 to 16 threads, depending on whether the pool handles I/O or in-memory computation.

RE options: I didn't know subscribing by entire product or month was a requirement.

Guido.

On Wed, May 3, 2017 at 1:48 PM, Robert Engels <[hidden email]> wrote:



I think you are making a mistake in your sizing. A single TCP FIX connection might submit a single order that generates market data that needs to go to thousands of users; thus the need for multicast.

TCP FIX market data is fine for systems with very few connected clients, or for slow ones where latency is not an issue.

Also, there are technologies like Zing, or non-Java systems, that don't have a stop-the-world GC. Our system performs most work in the sub-50-microsecond range, with market data in the sub-15-microsecond range and max delays under 500 microseconds.

On May 3, 2017, at 7:36 AM, Guido Medina <[hidden email]> wrote:

Most so called low latency solutions fall apart anyway when you add the network I/O at least in fix TCP,
most trading connectors are via FIX TCP so it doesn't matter you keep prices updated at a nano second level,

All theories will fall apart when it has to reach the network.
As for our internal cluster, we use Aeron https://github.com/real-logic/Aeron

Our target is 5ms top and we are making it, most sellers of so called I have the support low latency solution will fall apart when you run it with a stop the world GC anyway,
is all labels and marketing.

Guido.

On Wed, May 3, 2017 at 12:57 PM, Robert Engels <[hidden email]> wrote:

FIX (tcp) market data is not suitable for low latency. You need to use multicast. Sending market data updates on millisecond intervals is way to slow. Even your tick solution would quickly fall apart for large option complexes. 

On May 3, 2017, at 3:15 AM, Guido Medina <[hidden email]> wrote:

Another solution for things like streaming prices is to set tick times, some clients have good connections and others bad, we do accumulate prices and send them every X millis,
and we send only if they have changed, checking all that in memory; assuming each instruments has an integer ID; is very fast check on a hash map.

Also we have a split of instruments / ticks where only instrument ID mod (%) ticks are sent at one specific milli, now you need to make sure your kernel scheduler is at it,
that is to say that if you are using Linux you a 4.x kernel and make sure your GC pauses are small or don't use a "stop the world gc"

Example tick timers for 5 ticks:

- Tick 1 will send instrument IDs 1, 6, 11, 16, ...where (ID - 1) % 5 = 0 <- this timer will run at millis 0, 5, 10, etc
- Tick 2 will send instrument IDs 2, 7, 12, 17, ...where (ID - 1) % 5 = 1 <- this timer will run at millis 1, 6, 11, etc
- Tick 3 will send instrument IDs 3, 8, 13, 18, ...where (ID - 1) % 5 = 2 <- this timer will run at millis 2, 7, 12, etc
- Tick 4 will send instrument IDs 4, 9, 14, 19, ...where (ID - 1) % 5 = 3 <- this timer will run at millis 3, 8, 13, etc
- Tick 5 will send instrument IDs 5, 10, 15, 20, ...where (ID - 1) % 5 = 4 <- this timer will run at millis 4, 9, 14, etc

Another way would be to use a non-blocking bounded intermediary queue, see JC tools implementations, that way when your intermediary queue is full you drop messages silently,
maybe such implementation can be added as a plugin to QFJ, I will take a look and see what can be done, either way go JIRA and create the ticket so that we can follow up with ideas.

Every client wants you to send every update all the time, unfortunately most clients can't even handle such "in their imagination" requirement,
accumulating and sending is usually the way to go for thins like this, by doing this you will be doing a favor to both parties and have a healthy system.

HTH,

Guido.

On Wed, May 3, 2017 at 8:05 AM, Christoph John <[hidden email]> wrote:
QuickFIX/J Documentation: http://www.quickfixj.org/documentation/
QuickFIX/J
Support: http://www.quickfixj.org/support/


Hi,

IMHO there is no other way to either increase your heap mem (how much heap do you have configured
now?) or to change the behaviour in the code. You could also create a pull request with your
solution on https://github.com/quickfix-j/quickfixj/pulls to have your modifications put back into
the code base.

Cheers,
Chris.


On 02/05/17 12:23, Vipin Chaudhary wrote:
> Hi Team,
>
> I am facing memory issue with QuickfixJ.
> When we send too many messages (when we send backlog data) to clients our Acceptor application
> takes very high memory and GC is unable to clear memory.
>
> On further analysis I found that when we send message to a session(session.send(message)) messages
> is handled to NioSession (mina library). Apache mina itself maintain a WriteMessageQueue of
> messages. Now in case we produce mesages too fast then this queue size increase.
>
> This lead to high memory uses, which lead to out of memory.
>
> I am thinking to block the send call when queue size is more than a threshold, for this I need to
> update the IoSessionResponder.java so that I can access this MinaQueue Size.
> This will need to rebuild the QuickFixj
>
> Is there any other way to better handling this scenerio.
>
> QuickFix provide /MaxScheduledWriteRequests /but when this threshold exceed then quickfix
> disconnect the consumer. I don't want disconnection but want that this call should block untill
> WriteMessageQueue size reduced below a particular threshold.
>
> Thanks
> Vipin

--
Christoph John
Development & Support
Direct: <a href="tel:%2B49%20241%20557080-28" value="+4924155708028" target="_blank">+49 241 557080-28
Mailto:[hidden email]



http://www.macd.com <http://www.macd.com/>
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------
MACD GmbH
Oppenhoffallee 103
D-52066 Aachen
Tel: <a href="tel:%2B49%20241%20557080-0" value="+492415570800" target="_blank">+49 241 557080-0 | Fax: <a href="tel:%2B49%20241%20557080-10" value="+4924155708010" target="_blank">+49 241 557080-10
         Amtsgericht Aachen: HRB 8151
Ust.-Id: DE 813021663

Geschäftsführer: George Macdonald
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------

take care of the environment - print only if necessary

------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
_______________________________________________
Quickfixj-users mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/quickfixj-users


Re: Quickfix taking huge memory when network latency is high

Robert Engels-2
QuickFIX/J Documentation: http://www.quickfixj.org/documentation/
QuickFIX/J Support: http://www.quickfixj.org/support/



That is the beauty of multicast: you don't need to do the "for each connection send this message" yourself. The network-layer protocols manage the subscriptions and "ensure" the message gets to the required recipients. Far more efficient.

On May 3, 2017, at 8:38 AM, Guido Medina <[hidden email]> wrote:



I know about Zing and am running some tests with it at the moment, but they are still fixing a Scala 2.12 issue, as it does not yet work well with it. Besides, not everybody can afford it. And even if you multicast, the "magic" of iterating through each connection and sending the message happens somewhere:
the fact that you don't do it programmatically doesn't mean you can avoid "for each connection send this message", which is probably optimized into multiple threads/workers taking groups of connections.

You don't need many threads to serve thousands of connections quickly, just good algorithms and good asynchronous design. Our application (different needs, different market) has thread pools of no more than 8 to 16 threads, depending on whether the pool handles I/O or in-memory computations.

RE Options: Didn't know it was a requirement per subscription.

Guido.

On Wed, May 3, 2017 at 1:48 PM, Robert Engels <[hidden email]> wrote:

I think you are making a mistake in your sizing. A single TCP FIX connection might submit a single order that generates market data that needs to go to thousands of users. Thus the need for multicast.

TCP FIX market data is fine for systems with very few connected clients, or slow ones where latency is not an issue.

Also, there are technologies like Zing, or non-Java systems, that don't have stop-the-world GC. Our system performs most work in the sub-50-microsecond range, with market data in the sub-15-microsecond range and maximum delays under 500 microseconds.

On May 3, 2017, at 7:36 AM, Guido Medina <[hidden email]> wrote:

Most so-called low-latency solutions fall apart anyway once you add network I/O, at least over FIX TCP; most trading connectors are via FIX TCP, so it doesn't matter that you keep prices updated at a nanosecond level.

All theories fall apart when the data has to reach the network.
As for our internal cluster, we use Aeron: https://github.com/real-logic/Aeron

Our target is 5 ms at the top and we are meeting it. Most sellers of so-called low-latency solutions will fall apart when you run them with a stop-the-world GC anyway; it is all labels and marketing.

Guido.

On Wed, May 3, 2017 at 12:57 PM, Robert Engels <[hidden email]> wrote:

FIX (TCP) market data is not suitable for low latency. You need to use multicast. Sending market data updates at millisecond intervals is way too slow. Even your tick solution would quickly fall apart for large option complexes.

On May 3, 2017, at 3:15 AM, Guido Medina <[hidden email]> wrote:

Another solution for things like streaming prices is to set tick times. Some clients have good connections and others bad, so we accumulate prices and send them every X millis, and only if they have changed, checking all of that in memory; assuming each instrument has an integer ID, that is a very fast check against a hash map.

We also split instruments across ticks, where only instruments whose ID mod (%) ticks matches the slot are sent at one specific milli. You then need to make sure your kernel scheduler is up to it; that is to say, if you are using Linux, use a 4.x kernel, and make sure your GC pauses are small (or don't use a stop-the-world GC).

Example tick timers for 5 ticks:

- Tick 1 will send instrument IDs 1, 6, 11, 16, ...where (ID - 1) % 5 = 0 <- this timer will run at millis 0, 5, 10, etc
- Tick 2 will send instrument IDs 2, 7, 12, 17, ...where (ID - 1) % 5 = 1 <- this timer will run at millis 1, 6, 11, etc
- Tick 3 will send instrument IDs 3, 8, 13, 18, ...where (ID - 1) % 5 = 2 <- this timer will run at millis 2, 7, 12, etc
- Tick 4 will send instrument IDs 4, 9, 14, 19, ...where (ID - 1) % 5 = 3 <- this timer will run at millis 3, 8, 13, etc
- Tick 5 will send instrument IDs 5, 10, 15, 20, ...where (ID - 1) % 5 = 4 <- this timer will run at millis 4, 9, 14, etc
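The slot assignment above can be sketched as a simple modulo mapping. This is a self-contained illustration with made-up names, not QuickFIX/J or production code:

```java
/**
 * Sketch of the tick-slot assignment described above: instrument IDs are
 * spread across N tick timers via (ID - 1) % N, so each timer only sends
 * a fraction of the instruments per milli.
 */
class TickSlots {
    private final int ticks;

    TickSlots(int ticks) { this.ticks = ticks; }

    /** 1-based slot for a 1-based instrument ID, per (ID - 1) % ticks. */
    int slotFor(int instrumentId) {
        return ((instrumentId - 1) % ticks) + 1;
    }
}

class TickSlotsDemo {
    public static void main(String[] args) {
        TickSlots slots = new TickSlots(5);
        // IDs 1, 6, 11, 16, ... all land in slot 1, matching the list above.
        System.out.println(slots.slotFor(1) + " " + slots.slotFor(6) + " " + slots.slotFor(20)); // prints "1 1 5"
    }
}
```

Each timer then iterates only the instruments whose slot matches its own tick offset.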

Another way would be to use a non-blocking bounded intermediary queue (see the JCTools implementations); that way, when the intermediary queue is full, you drop messages silently. Maybe such an implementation can be added as a plugin to QFJ; I will take a look and see what can be done. Either way, go to JIRA and create a ticket so that we can follow up with ideas.
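A drop-on-full intermediary queue of that kind can be sketched with a standard bounded queue; JCTools' lock-free queues offer the same offer-returns-false contract with better concurrency. Class and method names here are illustrative, not an existing QFJ plugin:

```java
import java.util.concurrent.ArrayBlockingQueue;

/** Bounded intermediary queue: publish() drops silently when the consumer lags. */
class DroppingBuffer<T> {
    private final ArrayBlockingQueue<T> queue;
    private long dropped; // count of silently dropped messages (not thread-safe; use LongAdder in real code)

    DroppingBuffer(int capacity) { this.queue = new ArrayBlockingQueue<>(capacity); }

    /** Never blocks the producer; returns false (and counts the drop) if full. */
    boolean publish(T msg) {
        boolean accepted = queue.offer(msg);
        if (!accepted) dropped++;
        return accepted;
    }

    /** Consumer side: next pending message, or null if empty. */
    T poll() { return queue.poll(); }

    long droppedCount() { return dropped; }
}
```

The producer stays fast regardless of the consumer; the trade-off is that slow clients lose intermediate updates, which is usually acceptable for conflatable data like prices.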

Every client wants you to send every update all the time; unfortunately, most clients can't even handle such an "in their imagination" requirement. Accumulating and then sending is usually the way to go for things like this; by doing so you do both parties a favor and keep the system healthy.

HTH,

Guido.

On Wed, May 3, 2017 at 8:05 AM, Christoph John <[hidden email]> wrote:


Hi,

IMHO there is no way other than to either increase your heap memory (how much heap do you have
configured now?) or to change the behaviour in the code. You could also create a pull request with
your solution at https://github.com/quickfix-j/quickfixj/pulls to have your modifications merged
back into the code base.

Cheers,
Chris.


On 02/05/17 12:23, Vipin Chaudhary wrote:
> Hi Team,
>
> I am facing a memory issue with QuickFIX/J.
> When we send too many messages to clients (e.g. when we send backlog data), our Acceptor application
> uses very high memory and GC is unable to reclaim it.
>
> On further analysis I found that when we send a message to a session (session.send(message)), the
> message is handed off to NioSession (the MINA library). Apache MINA itself maintains a
> WriteRequestQueue of messages, so if we produce messages too fast, this queue grows.
>
> This leads to high memory usage, which leads to out-of-memory errors.
>
> I am thinking of blocking the send call when the queue size exceeds a threshold; for this I need to
> update IoSessionResponder.java so that I can access the MINA queue size.
> This would require rebuilding QuickFIX/J.
>
> Is there a better way to handle this scenario?
>
> QuickFIX provides MaxScheduledWriteRequests, but when this threshold is exceeded, QuickFIX
> disconnects the consumer. I don't want a disconnection; instead, the send call should block until
> the WriteRequestQueue size drops below a particular threshold.
>
> Thanks
> Vipin



Re: Quickfix taking huge memory when network latency is high

Colin DuPlantis
In reply to this post by Vipin Chaudhary



Vipin,

If you're sending out a burst of more messages than the engine can handle, you will just need more RAM allocated; there's no way around that.

So, there are a couple of strategies you can pursue:

- Reduce the number of messages you're sending out (can you conflate or combine some of them?)
- Increase the speed of the engine with multi-threading, if possible (if the messages are being sent to different sessions instead of all to the same session, use a multi-threaded acceptor)
- Increase the heap size allocated to the JVM and use a low-pause GC implementation (like G1)

I think in a later message you indicated that the messages are related to market data? If so, can you combine data for more than one instrument in the same message?
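The conflate/combine suggestion amounts to keeping only the latest update per instrument and flushing the pending set periodically into one outgoing message. A minimal plain-Java sketch (names are made up for illustration; this is not QuickFIX/J API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Conflates updates: only the latest value per instrument survives until flush. */
class Conflator<V> {
    private final Map<Integer, V> latest = new LinkedHashMap<>();

    /** Overwrites any pending update for this instrument. */
    void update(int instrumentId, V value) { latest.put(instrumentId, value); }

    /** Returns and clears all pending updates, to be combined into one message. */
    Map<Integer, V> flush() {
        Map<Integer, V> out = new LinkedHashMap<>(latest);
        latest.clear();
        return out;
    }
}
```

A timer would call flush() every tick and build a single market data message from the returned map, instead of one message per update.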


On 05/02/2017 03:23 AM, Vipin Chaudhary wrote:




Hi Team,

I am facing a memory issue with QuickFIX/J.
When we send too many messages to clients (e.g. when we send backlog data), our Acceptor application uses very high memory and GC is unable to reclaim it.

On further analysis I found that when we send a message to a session (session.send(message)), the message is handed off to NioSession (the MINA library). Apache MINA itself maintains a WriteRequestQueue of messages, so if we produce messages too fast, this queue grows.

This leads to high memory usage, which leads to out-of-memory errors.

I am thinking of blocking the send call when the queue size exceeds a threshold; for this I need to update IoSessionResponder.java so that I can access the MINA queue size.
This would require rebuilding QuickFIX/J.

Is there a better way to handle this scenario?

QuickFIX provides MaxScheduledWriteRequests, but when this threshold is exceeded, QuickFIX disconnects the consumer. I don't want a disconnection; instead, the send call should block until the WriteRequestQueue size drops below a particular threshold.

Thanks
Vipin



-- 
Colin DuPlantis
Chief Architect, Marketcetera
Download, Run, Trade
888.868.4884 +1.541.306.6556
http://www.marketcetera.org


Re: Quickfix taking huge memory when network latency is high

Christoph John
In reply to this post by Vipin Chaudhary


Hi,

you can set the size of the blocking queue via a constructor parameter on the Acceptor/Initiator.
However, this queue is for received messages only.
I did some quick googling but have not found a way to set the write queue size in MINA.

Cheers,
Chris.



On 19/05/17 03:58, Robert Nicholson wrote:

> Does the strategy still ship with an unbounded blocking queue?
>
> Sent from my iPhone
>
> On May 2, 2017, at 5:23 AM, Vipin Chaudhary <[hidden email]
> <mailto:[hidden email]>> wrote:
>
>> Hi Team,
>>
>> I am facing a memory issue with QuickFIX/J.
>> When we send too many messages to clients (e.g. when we send backlog data), our Acceptor application
>> uses very high memory and GC is unable to reclaim it.
>>
>> On further analysis I found that when we send a message to a session (session.send(message)), the
>> message is handed off to NioSession (the MINA library). Apache MINA itself maintains a
>> WriteRequestQueue of messages, so if we produce messages too fast, this queue grows.
>>
>> This leads to high memory usage, which leads to out-of-memory errors.
>>
>> I am thinking of blocking the send call when the queue size exceeds a threshold; for this I need to
>> update IoSessionResponder.java so that I can access the MINA queue size.
>> This would require rebuilding QuickFIX/J.
>>
>> Is there a better way to handle this scenario?
>>
>> QuickFIX provides MaxScheduledWriteRequests, but when this threshold is exceeded, QuickFIX
>> disconnects the consumer. I don't want a disconnection; instead, the send call should block until
>> the WriteRequestQueue size drops below a particular threshold.
>>
>> Thanks
>> Vipin



Re: Quickfix taking huge memory when network latency is high

Robert Engels-2



You can't really set a bound on the outbound queue, as the upper layers would need to block: since message generation is independent of consumer delivery, you would need a complex mechanism of hand-off threads, spinning up new threads, etc.

Ideally QuickFIX would write the message to the filestore and then, as the outbound queue emptied, pull messages from the filestore to send on the session. It doesn't work that way today, but you could probably re-work the outbound flow to do that; I believe MINA provides the necessary callbacks.
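For the simpler blocking variant discussed earlier in the thread, a back-pressure wrapper can be sketched generically. In a real integration the pending-write count would come from MINA's IoSession.getScheduledWriteMessages(); here it is abstracted behind an IntSupplier so the sketch stays self-contained, and ThrottledSender is a hypothetical name, not QuickFIX/J API:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

/**
 * Sketch of a back-pressure wrapper: block the caller until the number of
 * pending writes drops below a threshold, instead of disconnecting the session.
 */
class ThrottledSender {
    private final IntSupplier pendingWrites; // e.g. session::getScheduledWriteMessages in MINA
    private final int maxPending;

    ThrottledSender(IntSupplier pendingWrites, int maxPending) {
        this.pendingWrites = pendingWrites;
        this.maxPending = maxPending;
    }

    /** Blocks (polling) until the write queue has room, then runs the send action. */
    void send(Runnable sendAction) throws InterruptedException {
        while (pendingWrites.getAsInt() >= maxPending) {
            Thread.sleep(1); // crude back-off; a listener/condition would be nicer
        }
        sendAction.run();
    }
}

class ThrottledSenderDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger pending = new AtomicInteger(0); // stands in for the MINA queue depth
        ThrottledSender sender = new ThrottledSender(pending::get, 100);
        sender.send(() -> System.out.println("sent without blocking"));
    }
}
```

Note that blocking the producer this way only works if message generation can tolerate stalls; otherwise the filestore-backed approach above is the safer design.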

On Mon, May 22, 2017 at 10:07 AM, Christoph John <[hidden email]> wrote:


Hi,

you can set the size of the blocking queue via a constructor parameter on the Acceptor/Initiator.
However, this queue is for received messages only.
I did some quick googling but have not found a way to set the write queue size in MINA.

Cheers,
Chris.



On 19/05/17 03:58, Robert Nicholson wrote:
> Does the strategy still ship with an unbounded blocking queue?
>
> Sent from my iPhone
>
> On May 2, 2017, at 5:23 AM, Vipin Chaudhary <[hidden email]
> <mailto:[hidden email]>> wrote:
>
>> Hi Team,
>>
>> I am facing a memory issue with QuickFIX/J.
>> When we send too many messages to clients (e.g. when we send backlog data), our Acceptor application
>> uses very high memory and GC is unable to reclaim it.
>>
>> On further analysis I found that when we send a message to a session (session.send(message)), the
>> message is handed off to NioSession (the MINA library). Apache MINA itself maintains a
>> WriteRequestQueue of messages, so if we produce messages too fast, this queue grows.
>>
>> This leads to high memory usage, which leads to out-of-memory errors.
>>
>> I am thinking of blocking the send call when the queue size exceeds a threshold; for this I need to
>> update IoSessionResponder.java so that I can access the MINA queue size.
>> This would require rebuilding QuickFIX/J.
>>
>> Is there a better way to handle this scenario?
>>
>> QuickFIX provides MaxScheduledWriteRequests, but when this threshold is exceeded, QuickFIX
>> disconnects the consumer. I don't want a disconnection; instead, the send call should block until
>> the WriteRequestQueue size drops below a particular threshold.
>>
>> Thanks
>> Vipin





--

Robert Engels

OptionsCity Software
150 S. Wacker Dr., Suite 2300
Chicago, IL 60606

O. +1 (312) 605-4500 | F. +1 (312) 635-1751

Connect with OptionsCity at www.optionscity.com | LinkedIn | Twitter | YouTube | Facebook
