I had a really interesting scenario recently: a tree data structure implemented in left-child right-sibling (LCRS) fashion, where the rank of each node was stored in a separate array parallel to the array holding the linked nodes. I realized I could keep the buffer containing the linked nodes sorted by rank (and the parallel array of ranks would then be sorted too, of course). This made sense in context because during an update I had to rewrite the relevant pointers (offsets into the buffer, in this case) anyway, and as long as ranks were only ever incremented rather than jumping around randomly, I could answer the important query without looking at the ranks at all, just by comparing positions in memory. That was a major win for memory locality. And because the nodes are linked, you can update existing data without any allocations at all. A non-LCRS, array-based version also has good cache locality in the typical way we expect, but its updates require allocating new arrays of different sizes and copying data over, which pessimizes updates; the LCRS version needs no heap allocations for them at all. It's a trade-off, of course, and not applicable in every scenario, but I wanted to throw it out there: there are cases where a linked implementation IMPROVES cache locality AND improves update speed.
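To make that concrete, here is a minimal Rust sketch of the idea; the type and method names are mine, not from the actual code I described. Child and sibling "pointers" are indices into one flat buffer that is kept sorted by rank, so the rank comparison never has to touch the rank array, and relinking a node rewrites a couple of indices without any heap allocation.

```rust
// Hypothetical sketch, not the exact structure from the post above.
// All node "pointers" are indices into `nodes`; usize::MAX means "none".
const NONE: usize = usize::MAX;

#[derive(Clone, Copy)]
struct Node {
    first_child: usize,  // index of first child in `nodes`, or NONE
    next_sibling: usize, // index of next sibling in `nodes`, or NONE
}

struct LcrsTree {
    nodes: Vec<Node>, // kept sorted so that index order == rank order
    ranks: Vec<u32>,  // ranks[i] belongs to nodes[i]; sorted as well
}

impl LcrsTree {
    /// The "important query": is node `a` ranked below node `b`?
    /// Because the buffer is sorted by rank, this never reads `ranks` --
    /// comparing positions in the buffer is enough.
    fn rank_lt(&self, a: usize, b: usize) -> bool {
        a < b
    }

    /// Look up the actual rank only when the value itself is needed.
    fn rank_of(&self, i: usize) -> u32 {
        self.ranks[i]
    }

    /// Re-attach an already-detached node as the first child of `parent`.
    /// Only two indices are rewritten; no heap allocation happens.
    fn attach_first_child(&mut self, parent: usize, child: usize) {
        self.nodes[child].next_sibling = self.nodes[parent].first_child;
        self.nodes[parent].first_child = child;
    }
}
```

The point is that `rank_lt` never reads the rank array: keeping the buffer sorted turns the comparison into an index comparison, which is exactly the locality win above, while the index-based links keep updates allocation-free.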
Another technique is to build a linked version of your structure first and then copy the data over into an array-based version once, if the array-based version really is the better implementation for your structure but you don't want a million allocations and deallocations along the way, with each data element copied a dozen times as the arrays grow.
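A rough sketch of that build-then-flatten pattern, again in Rust with made-up names: inserts during building are O(1) and never move data that's already there, and a single pass at the end copies each element exactly once into the contiguous array you actually want to query.

```rust
// Hypothetical sketch of "build linked first, flatten once at the end".
struct ListNode<T> {
    value: T,
    next: Option<Box<ListNode<T>>>,
}

/// O(1) insert at the front: one small allocation, nothing already
/// built gets moved or copied.
fn push_front<T>(head: Option<Box<ListNode<T>>>, value: T) -> Option<Box<ListNode<T>>> {
    Some(Box::new(ListNode { value, next: head }))
}

/// One pass at the end: copy each element exactly once into a
/// contiguous array sized up front, then query that from here on.
fn flatten<T>(mut head: Option<Box<ListNode<T>>>, len: usize) -> Vec<T> {
    let mut out = Vec::with_capacity(len);
    while let Some(node) = head {
        let ListNode { value, next } = *node;
        out.push(value);
        head = next;
    }
    out
}

fn main() {
    // Build in whatever order is convenient for the linked version...
    let mut list = None;
    for v in (0..1_000).rev() {
        list = push_front(list, v);
    }
    // ...then pay for contiguity exactly once.
    let flat = flatten(list, 1_000);
    assert_eq!(flat[0], 0);
}
```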