
How much RAM do I need to use MySQL Cluster? Is it possible to use disk memory at all?


In MySQL 5.0 and earlier, MySQL Cluster was in-memory only: all table data, including indexes, was stored in RAM. If your data took up 1 GB of space and you wanted to keep one replica of it in the cluster, you needed 2 GB of memory (1 GB per replica), in addition to the memory required by the operating system and any applications running on the cluster hosts. This is still true of in-memory tables.

If a data node’s memory usage exceeds what is available in RAM, the system attempts to use swap space up to the limit set by DataMemory. At best this results in severely degraded performance, and it may cause the node to be dropped because of slow response time (missed heartbeats). For this reason, we do not recommend relying on disk swapping in a production environment. In any case, once the DataMemory limit is reached, any operation that requires additional memory (such as an insert) will fail.

NDBCLUSTER in MySQL 5.1 and MySQL Cluster NDB 6.x includes support for Disk Data tables, which let you store nonindexed columns on disk rather than in RAM; indexed columns, and the indexes themselves, must still be kept in memory.
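As a rough illustration of how Disk Data tables are declared, a minimal sketch looks like the following. All object names, file names, and sizes here are hypothetical, chosen only for the example; size the real files to your data.

-- Undo logfile group required for Disk Data tables
CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 64M
    UNDO_BUFFER_SIZE 16M
    ENGINE NDBCLUSTER;

-- Tablespace holding the on-disk data file
CREATE TABLESPACE ts_1
    ADD DATAFILE 'data_1.dat'
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 256M
    ENGINE NDBCLUSTER;

-- Nonindexed columns of this table are stored on disk;
-- the primary key and its index remain in RAM
CREATE TABLE t_disk (
    id INT NOT NULL PRIMARY KEY,
    payload VARCHAR(255)
) TABLESPACE ts_1 STORAGE DISK
  ENGINE NDBCLUSTER;

Note that disk storage applies only to nonindexed columns, so tables with many indexed columns still need RAM sized accordingly.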

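The DataMemory limit mentioned above, along with IndexMemory for hash indexes, is set per data node in the cluster configuration file (config.ini). A minimal sketch with illustrative values only:

[ndbd default]
# Two copies of each fragment; RAM requirements scale with this
NoOfReplicas=2
# Space for table data and ordered indexes, per data node
DataMemory=1024M
# Space for primary key and unique hash indexes, per data node
IndexMemory=128M

These limits apply per data node; as in the 1 GB example above, each replica of the data consumes its own DataMemory.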
