
How scalable is Ganymede, with its memory-based database?

Fairly scalable. At ARL:UT, we use Ganymede to manage a rather complex NIS domain containing around 800 users, along with our DNS domain, which has over 2,500 system records defined. During execution, our Ganymede server takes up only about 25 megabytes of JVM heap space, or about 100 megabytes in total for the JVM process once shared libraries and the like are counted.

During development of Ganymede 2.0, we test-loaded the Ganymede server with 250,000 users and 250,000 groups of random data, using the userKit. At those crazy levels, the Ganymede server balloons up to 600 megabytes of RAM, but it still works fine.

So we feel very confident in saying that the Ganymede server should be able to scale up to handle just about as large a set of data as you'd ever want to manage on a single server. The server should degrade gracefully if heap usage gets to be too much: as the amount of data loaded into the server approaches the upper limit of the JVM's maximum allocated heap, garbage collection activity increases and the server slows down rather than failing outright.
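If you want to check heap consumption on your own deployment, here is a minimal standalone Java sketch (not part of Ganymede; the class name is ours) that reports the running JVM's heap usage via the standard java.lang.management API, the same kind of measurement behind the figures quoted above:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/**
 * Standalone utility (hypothetical, not part of Ganymede) that prints
 * the current JVM's heap usage. Run it inside the same JVM as the
 * server (or adapt it into a JMX client) to see how much of the
 * allocated heap an in-memory database is actually consuming.
 */
public class HeapReport {
    public static void main(String[] args) {
        MemoryMXBean memBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memBean.getHeapMemoryUsage();

        long usedMb      = heap.getUsed()      / (1024 * 1024);
        long committedMb = heap.getCommitted() / (1024 * 1024);
        // getMax() returns -1 if no maximum heap size is defined
        long maxMb       = heap.getMax() < 0 ? -1
                           : heap.getMax() / (1024 * 1024);

        System.out.println("Heap used:       " + usedMb + " MB");
        System.out.println("Heap committed:  " + committedMb + " MB");
        System.out.println("Heap max (-Xmx): " + maxMb + " MB");
    }
}
```

If the server starts running up against its heap ceiling, the usual remedy is to raise the JVM's maximum heap size at launch with the standard -Xmx flag (for example, java -Xmx1024m ...), which pushes back the garbage collection pressure described above.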
