
[Compatibility] Added LCS command #843

Open · wants to merge 14 commits into main
Conversation

Vijay-Nirmal
Contributor

Adding the LCS command to Garnet (see the usage sketch below):

  • Add LCS commands
  • Add integration tests, ACL tests, and slot verification tests
  • Add documentation
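
For reference, a quick usage sketch of the Redis-compatible LCS semantics this PR targets (the ohmytext/mynewtext example from the Redis docs):

MSET key1 ohmytext key2 mynewtext
LCS key1 key2                      => "mytext"
LCS key1 key2 LEN                  => (integer) 6
LCS key1 key2 IDX MINMATCHLEN 4    => match ranges (start/end indices in both keys) at least 4 chars long
LCS key1 key2 IDX WITHMATCHLEN     => match ranges plus the length of each match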

@TalZaccai TalZaccai self-requested a review December 3, 2024 19:26
@badrishc
Contributor

badrishc commented Dec 5, 2024

Nicely done!!

Contributor

@TalZaccai TalZaccai left a comment

Great contribution! This is a gnarly one to implement 👍
A few initial comments... Let me know if you need any further guidance.

libs/server/API/IGarnetApi.cs (outdated, resolved)
}
else if (option.EqualsUpperCaseSpanIgnoringCase(CmdStrings.MINMATCHLEN))
{
if (!parseState.TryGetInt(tokenIdx++, out var minLen) || minLen < 0)
Contributor

Doesn't look like there's an issue with setting minLen to a non-positive number; it just doesn't do anything.
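
(For reference, Redis itself clamps a negative MINMATCHLEN to 0 rather than rejecting it, so the non-erroring alternative would look roughly like the sketch below. This is only an illustration against the snippet above; minMatchLen and the error helper are placeholders, not the PR's actual code.)

else if (option.EqualsUpperCaseSpanIgnoringCase(CmdStrings.MINMATCHLEN))
{
    // Reject only non-integer input; surface the usual "value is not an integer" error.
    if (!parseState.TryGetInt(tokenIdx++, out var minLen))
        return AbortWithParseError(); // placeholder for whatever error path the command already uses

    // Redis clamps negative MINMATCHLEN values to 0 instead of erroring,
    // so a non-positive value simply has no effect on the result.
    minMatchLen = Math.Max(minLen, 0); // minMatchLen: placeholder local for the parsed option
}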

Contributor Author

It might be an oversight on Redis's side. Personally, I don't think we should inherit their flaws (provided this really is a flaw in Redis). Do you want me to remove this check?

Contributor

I agree with you 100%; I'm just wondering if this might break any existing clients that rely on this logic for some reason...

Contributor Author

Done

libs/server/Resp/ArrayCommands.cs (outdated, resolved)
libs/server/Storage/Session/MainStore/MainStoreOps.cs (outdated, resolved)
libs/server/Storage/Session/MainStore/MainStoreOps.cs (outdated, resolved)
{
var m = str1.Length;
var n = str2.Length;
var dp = new int[m + 1, n + 1];
Contributor

As you're only using the current and previous rows, it's sufficient to hold two 1D arrays.
In general, it is not a good idea to use multidimensional arrays; see: https://stackoverflow.com/questions/597720/differences-between-a-multidimensional-array-and-an-array-of-arrays
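
A minimal sketch of the two-row idea for the length-only case (illustrative code, not the PR's implementation):

// Length-only LCS using two rolling 1D rows instead of the full (m+1) x (n+1) table.
static int LcsLength(ReadOnlySpan<byte> str1, ReadOnlySpan<byte> str2)
{
    var prev = new int[str2.Length + 1];
    var curr = new int[str2.Length + 1];

    for (var i = 1; i <= str1.Length; i++)
    {
        for (var j = 1; j <= str2.Length; j++)
        {
            curr[j] = str1[i - 1] == str2[j - 1]
                ? prev[j - 1] + 1
                : Math.Max(prev[j], curr[j - 1]);
        }

        // Swap rows: prev now holds row i, and curr is fully overwritten on the next pass.
        (prev, curr) = (curr, prev);
    }

    return prev[str2.Length];
}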

Contributor Author

@Vijay-Nirmal Vijay-Nirmal Dec 6, 2024

I tried the 2D array first; when using two 1D arrays I get wrong results compared to Redis, because two rows can't maintain the direction when backtracking. This implementation is inspired by https://www.geeksforgeeks.org/longest-common-subsequence-dp-4/. What I can do is use two 1D arrays when we only need the length (lenOnly option); for the other cases we need the full table. I tried a lot with two 1D arrays, and I'm pretty sure we can't use them when we need the actual LCS back or its indices.

Also, modern .NET has comparable performance between multidimensional arrays and arrays of arrays. Source: https://stackoverflow.com/a/63031036/7331395 (I will run it locally and test.)
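
For reference, a minimal sketch of why the reconstruction path keeps the full table: the backtracking walk below reads dp[x - 1, y] and dp[x, y - 1] from arbitrary earlier rows, which two rolling rows cannot provide (illustrative code, not the PR's exact implementation):

// LCS with string reconstruction; the backtracking phase needs the full DP table.
static byte[] LcsString(ReadOnlySpan<byte> str1, ReadOnlySpan<byte> str2)
{
    var m = str1.Length;
    var n = str2.Length;
    var dp = new int[m + 1, n + 1];

    // Forward pass: fill the whole table.
    for (var i = 1; i <= m; i++)
        for (var j = 1; j <= n; j++)
            dp[i, j] = str1[i - 1] == str2[j - 1]
                ? dp[i - 1, j - 1] + 1
                : Math.Max(dp[i - 1, j], dp[i, j - 1]);

    // Backtracking pass: walk from dp[m, n] back towards dp[0, 0]. This step
    // consults earlier rows at arbitrary positions, which is why rolling rows are not enough.
    var result = new byte[dp[m, n]];
    int x = m, y = n, k = result.Length;
    while (x > 0 && y > 0)
    {
        if (str1[x - 1] == str2[y - 1])
        {
            result[--k] = str1[x - 1];
            x--;
            y--;
        }
        else if (dp[x - 1, y] >= dp[x, y - 1])
        {
            x--;
        }
        else
        {
            y--;
        }
    }

    return result;
}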

Contributor Author

@Vijay-Nirmal Vijay-Nirmal Dec 8, 2024

I verified the Valkey code; it also uses a multi-dimensional array with regressive calls, and I think that might be less performant than our implementation. The space complexity of LCS is O(m * n); unless we come up with a new algorithm to find the LCS, I don't think we can get lower space complexity.

Contributor

Link to the code?

Contributor Author

Here: they have a 1D array with m * n items, using regressive calls to populate it.

https://github.com/valkey-io/valkey/blob/f20d629dbe31d31eb82e360f9da4ef94ba9aabdc/src/t_string.c#L806-L849

Contributor

@TalZaccai TalZaccai Dec 12, 2024

Let's perf test this as Badrish suggested; we can compare with Valkey for funsies.

Contributor Author

I tested by adding a BDN benchmark; [m,n], [m][n], and [m*n] don't make any noticeable difference.
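
A minimal BenchmarkDotNet sketch of that layout comparison (sizes, names, and the fill pattern here are illustrative assumptions, not the benchmark actually added in this PR):

using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LcsTableLayoutBenchmark
{
    // Illustrative table size; the real LCS table is (m+1) x (n+1) for the two values.
    // Each benchmark allocates its table and walks it with the DP access pattern;
    // the loop body is not a full LCS, it only exercises the layout.
    private const int M = 512, N = 512;

    [Benchmark(Baseline = true)]
    public int Rectangular() // int[m, n]
    {
        var dp = new int[M + 1, N + 1];
        var sum = 0;
        for (var i = 1; i <= M; i++)
            for (var j = 1; j <= N; j++)
                sum += dp[i, j] = dp[i - 1, j - 1] + 1;
        return sum;
    }

    [Benchmark]
    public int Jagged() // int[m][n]
    {
        var dp = new int[M + 1][];
        for (var i = 0; i <= M; i++) dp[i] = new int[N + 1];
        var sum = 0;
        for (var i = 1; i <= M; i++)
            for (var j = 1; j <= N; j++)
                sum += dp[i][j] = dp[i - 1][j - 1] + 1;
        return sum;
    }

    [Benchmark]
    public int Flat() // int[m * n]
    {
        var dp = new int[(M + 1) * (N + 1)];
        var sum = 0;
        for (var i = 1; i <= M; i++)
            for (var j = 1; j <= N; j++)
                sum += dp[i * (N + 1) + j] = dp[(i - 1) * (N + 1) + (j - 1)] + 1;
        return sum;
    }

    // Entry point: dotnet run -c Release
    public static void Main() => BenchmarkRunner.Run<LcsTableLayoutBenchmark>();
}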

Contributor Author

Any update on this?

libs/server/Storage/Session/MainStore/MainStoreOps.cs (outdated, resolved)
@microsoft microsoft deleted a comment from TalZaccai Dec 8, 2024
@Vijay-Nirmal
Contributor Author

@TalZaccai Fixed the review comments, with one open question.

@badrishc
Contributor

badrishc commented Dec 8, 2024

Might be a good idea to perf test these commands, varying #threads:

time memtier_benchmark -s <host> -p<port> -Predis --hide-histogram --command="<insert command here>" -c 1 -t 1

another example:

memtier_benchmark -s 10.3.0.54 -p 12002 -c 50 -t 14 \
  --command="hset __key__ f1 v1 f2 v2 f3 v3 f4 v4 f5 v5 f6 v6 f7 v7" \
  --hide-histogram --test-time 180 --run-count 3

@Vijay-Nirmal
Contributor Author

I don't think I have a proper setup to run memtier_benchmark, and I certainly don't have a proper setup to run a benchmark comparing against Redis/Valkey. I am running Garnet on Windows and memtier_benchmark in WSL with mirrored networking, and I see a lot of run-to-run variance. If you look at the two benchmarks below (one is LCS and the other is a simple GET), GET appears to be slower than LCS, and the GET numbers are not consistent: if I rerun the test I get different results.

Data Load

MSET key1 ohmytext key2 mynewtext

LCS

nirmal@Nirmal-Omen:~$ memtier_benchmark -c 50 -t 14 --command="LCS key1 key2" --hide-histogram --test-time 120 --run-count 3
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 120 secs]  0 threads:    14981678 ops,  114772 (avg:  124825) ops/sec, 4.93MB/sec (avg: 5.36MB/sec),  6.09 (avg:  5.60) msec latency

[RUN #2] Preparing benchmark client...
[RUN #2] Launching threads now...
[RUN #2 100%, 120 secs]  0 threads:    14388754 ops,  121305 (avg:  119886) ops/sec, 5.21MB/sec (avg: 5.14MB/sec),  5.76 (avg:  5.83) msec latency

[RUN #3] Preparing benchmark client...
[RUN #3] Launching threads now...
[RUN #3 100%, 120 secs]  0 threads:    15949146 ops,  140447 (avg:  132892) ops/sec, 6.03MB/sec (avg: 5.70MB/sec),  4.98 (avg:  5.26) msec latency

14        Threads
50        Connections per thread
120       Seconds


BEST RUN RESULTS
==================================================================================================
Type         Ops/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
--------------------------------------------------------------------------------------------------
Lcss       132892.98         5.26037         5.05500        11.51900        23.93500      5840.02
Totals     132892.98         5.26037         5.05500        11.51900        23.93500     11680.05


WORST RUN RESULTS
==================================================================================================
Type         Ops/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
--------------------------------------------------------------------------------------------------
Lcss       119889.36         5.83048         5.56700        12.92700        29.05500      5268.58
Totals     119889.36         5.83048         5.56700        12.92700        29.05500     10537.15


AGGREGATED AVERAGE RESULTS (3 runs)
==================================================================================================
Type         Ops/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
--------------------------------------------------------------------------------------------------
Lcss       126120.02         5.55366         5.31100        12.09500        26.11100      5542.38
Totals     126120.02         5.55366         5.31100        12.09500        26.11100     11084.77

GET

nirmal@Nirmal-Omen:~$ memtier_benchmark -c 50 -t 14 --command="GET key1" --hide-histogram --test-time 120 --run-count 3
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 120 secs]  0 threads:    16169842 ops,  113057 (avg:  134725) ops/sec, 3.99MB/sec (avg: 4.75MB/sec),  6.18 (avg:  5.19) msec latency

[RUN #2] Preparing benchmark client...
[RUN #2] Launching threads now...
[RUN #2 100%, 120 secs]  0 threads:    10675355 ops,   76615 (avg:   88939) ops/sec, 2.70MB/sec (avg: 3.14MB/sec),  9.11 (avg:  7.86) msec latency

[RUN #3] Preparing benchmark client...
[RUN #3] Launching threads now...
[RUN #3 100%, 120 secs]  0 threads:    10032443 ops,   90045 (avg:   83583) ops/sec, 3.18MB/sec (avg: 2.95MB/sec),  7.77 (avg:  8.36) msec latency

14        Threads
50        Connections per thread
120       Seconds


BEST RUN RESULTS
==================================================================================================
Type         Ops/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
--------------------------------------------------------------------------------------------------
Gets       134726.19         5.18910         4.95900        10.30300        19.19900      4868.04
Totals     134726.19         5.18910         4.95900        10.30300        19.19900      9736.07


WORST RUN RESULTS
==================================================================================================
Type         Ops/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
--------------------------------------------------------------------------------------------------
Gets        83565.60         8.36044         8.15900        22.52700        46.07900      3019.46
Totals      83565.60         8.36044         8.15900        22.52700        46.07900      6038.92


AGGREGATED AVERAGE RESULTS (3 runs)
==================================================================================================
Type         Ops/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
--------------------------------------------------------------------------------------------------
Gets       102476.23         6.82447         6.49500        15.99900        39.67900      3702.75
Totals     102476.23         6.82447         6.49500        15.99900        39.67900      7405.51

@badrishc @TalZaccai Given the inconsistency of my memtier_benchmark setup, I am not going to use it for performance testing of the LCS command. I will use Garnet's BDN.benchmark project to perf test LCS and check whether I can do anything to optimize it.

Feel free to point me in the right direction if I am doing something wrong with memtier_benchmark; I haven't used it before, so I may well be doing something wrong.
