Getting Started

Prerequisites

Installation

# Clone the repo
git clone https://github.com/rishic3/cuLSH.git
cd cuLSH/python

# Create a conda env
conda create -n culsh python=3.12 -y
conda activate culsh

# Install the package
pip install .
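
To verify the install, try importing the package in the active environment (a quick smoke test):

python -c "import culsh"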

For development, install in editable mode with the dev dependencies:

pip install -e .
pip install -r requirements_dev.txt

# Rebuild after C++ changes
make clean && make release

Docker

Alternatively, run cuLSH in a container with GPU support:

docker build -t culsh .
docker run --gpus all -it culsh

Mount a local data directory:

docker run --gpus all -it -v $(pwd)/data:/app/data culsh

Note

Requires the NVIDIA Container Toolkit for GPU support.

Basic Usage

NumPy Example

import numpy as np
from culsh import PStableLSH

# NumPy inputs
X = np.random.randn(10000, 128).astype(np.float32)
Q = np.random.randn(100, 128).astype(np.float32)

# Fit (returns PStableLSHModel)
model = PStableLSH(n_hash_tables=16, n_hashes=8, seed=42).fit(X)

# Query (returns candidate neighbors)
candidates = model.query(Q)

# Access results
indices = candidates.get_indices()   # Candidate neighbor indices
offsets = candidates.get_offsets()   # Start offset for each query in indices
counts = candidates.get_counts()     # Number of candidates per query
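
The three arrays form a CSR-style layout: the candidates for query i occupy indices[offsets[i] : offsets[i] + counts[i]]. A minimal sketch of iterating over per-query results (the slicing convention is inferred from the accessors above):

for i in range(Q.shape[0]):
    start = offsets[i]
    neighbors = indices[start : start + counts[i]]  # candidates for query i
    print(f"query {i}: {counts[i]} candidates")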

CuPy Example

import cupy as cp
from culsh import PStableLSH

# CuPy inputs
X = cp.random.randn(10000, 128, dtype=cp.float32)
Q = cp.random.randn(100, 128, dtype=cp.float32)

model = PStableLSH(n_hash_tables=16, n_hashes=8).fit(X)
candidates = model.query(Q)

# Access results as CuPy arrays
indices = candidates.get_indices(as_cupy=True)
offsets = candidates.get_offsets(as_cupy=True)
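
Since the candidates stay on the GPU, a natural follow-up is exact re-ranking of the candidate set with CuPy. A sketch for one query, assuming the same CSR-style layout as the NumPy example (counts fetched to the host for slicing):

counts = candidates.get_counts()

i = 0  # re-rank candidates for the first query
start, n = int(offsets[i]), int(counts[i])
cand = indices[start : start + n]

dists = cp.linalg.norm(X[cand] - Q[i], axis=1)  # exact distances, computed on GPU
top10 = cand[cp.argsort(dists)[:10]]            # 10 nearest candidates by true distance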

Sparse Data

import scipy.sparse
# import cupyx.scipy.sparse  # for GPU-resident sparse inputs
from culsh import MinHashLSH

X = scipy.sparse.random(10000, 5000, density=0.01, format='csr')
Q = scipy.sparse.random(100, 5000, density=0.01, format='csr')
# Or use CuPy:
# X = cupyx.scipy.sparse.random(10000, 5000, density=0.01, format='csr')
# Q = cupyx.scipy.sparse.random(100, 5000, density=0.01, format='csr')

model = MinHashLSH(n_hash_tables=32, n_hashes=4).fit(X)
candidates = model.query(Q)
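
MinHash approximates Jaccard similarity, so candidates can be re-ranked exactly on the host. A rough sketch for one query, treating each row's nonzero columns as set elements (assumes the same indices/offsets/counts accessors shown in the NumPy example):

import numpy as np

indices = candidates.get_indices()
offsets = candidates.get_offsets()
counts = candidates.get_counts()

q = set(Q[0].indices)  # nonzero columns of query 0
cand = indices[offsets[0] : offsets[0] + counts[0]]
sims = [len(q & set(X[j].indices)) / len(q | set(X[j].indices)) for j in cand]
best = cand[np.argsort(sims)[::-1][:10]]  # top-10 candidates by exact Jaccard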

Simultaneous Fit and Query

Query the same data used for fitting:

candidates = PStableLSH(n_hash_tables=16, n_hashes=8).fit_query(X)

Batched Queries

For large query sets, use batch_size to reduce peak GPU memory:

candidates = model.query(Q, batch_size=1000)
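
Batching should only affect peak memory, not the returned candidates; a quick sanity check under that assumption:

import numpy as np

full = model.query(Q)
batched = model.query(Q, batch_size=1000)
assert np.array_equal(full.get_indices(), batched.get_indices())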

Save and Load

# Save fitted model
model.save("model.npz")

# Load model
from culsh import PStableLSHModel
model = PStableLSHModel.load("model.npz")
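
A loaded model should reproduce the original results. A quick round-trip check, assuming querying a fitted model is deterministic:

import numpy as np

fitted = PStableLSH(n_hash_tables=16, n_hashes=8, seed=42).fit(X)
fitted.save("model.npz")
loaded = PStableLSHModel.load("model.npz")
assert np.array_equal(fitted.query(Q).get_indices(), loaded.query(Q).get_indices())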