Amazon dsstne - C++ 4.4k (https://github.com/amazon-archives/amazon-dsstne)
Amazon dsstne - C++ Similar Projects List
SweepContractor.jl - Julia 20
A Julia package for the contraction of tensor networks using the sweep-line-based contraction algorithm laid out in the paper "General tensor network decoding of 2D Pauli codes" (arXiv:2101.04125).
SparseZoo - Python
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes. SparseZoo is a constantly-growing collection of inference-optimized models and recipes to prototype from, simplifying and accelerating your time-to-value in building performant deep learning models.
Sparse learning - Python 324
Sparse learning library and sparse momentum resources.
This repo contains a sparse learning library which allows you to wrap any PyTorch neural network with a sparse mask to emulate the training of sparse neural networks.
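The sparse-mask idea can be pictured with a minimal NumPy sketch (illustrative only, not this library's actual API): a fixed binary mask zeroes out pruned weights and is re-applied after every update, so a dense tensor behaves like a sparse network during training.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

# Keep only the 50% largest-magnitude weights (hypothetical sparsity level).
threshold = np.median(np.abs(weights))
mask = (np.abs(weights) >= threshold).astype(weights.dtype)
weights *= mask

# A (fake) gradient step, followed by re-applying the mask,
# emulates training a sparse network with dense tensors.
grad = rng.normal(size=weights.shape)
weights -= 0.1 * grad
weights *= mask
```

After the masked update, the pruned connections remain exactly zero while the surviving weights train normally.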
Ray - Python 19.3k
A fast and simple framework for building and running distributed applications.
Ray is packaged with the following libraries for accelerating machine learning workloads:
RLlib: Scalable Reinforcement Learning
Tune: Scalable Hyperparameter Tuning
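What a tuner like Tune automates at scale can be sketched with a tiny random search in plain Python (this is an illustration of the concept, not Ray's API; the `trainable` objective and the `lr` search space are hypothetical):

```python
import random

def trainable(config):
    # Hypothetical objective: a quadratic with its minimum at lr = 0.3.
    return (config["lr"] - 0.3) ** 2

random.seed(0)
search_space = {"lr": lambda: random.uniform(0.0, 1.0)}

# Sample configurations, evaluate each, and keep the best trial.
trials = []
for _ in range(20):
    config = {name: sample() for name, sample in search_space.items()}
    trials.append((trainable(config), config))

best_loss, best_config = min(trials, key=lambda t: t[0])
```

Tune layers scheduling, early stopping, and distributed execution on top of this basic sample-evaluate-select loop.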
Pytorch dnc - Python 289
Differentiable Neural Computers, Sparse Access Memory and Sparse Differentiable Neural Computers, for PyTorch:
Differentiable Neural Computers (DNC)
Sparse Access Memory (SAM)
Sparse Differentiable Neural Computers (SDNC)
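A core mechanism shared by these architectures is content-based addressing: reads are a softmax-weighted sum over memory rows, weighted by similarity to a read key. A minimal NumPy sketch (illustrative, not this repo's API; `beta` is a hypothetical sharpening parameter):

```python
import numpy as np

def content_address(memory, key, beta=5.0):
    # Cosine similarity between the read key and each memory row.
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    # Sharpened softmax gives the read weighting over memory slots.
    e = np.exp(beta * sims)
    return e / e.sum()

memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
weights = content_address(memory, np.array([1.0, 0.0]))
read_vector = weights @ memory  # differentiable weighted read
```

Because every step is differentiable, gradients flow through the addressing weights into the controller network during training.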
Minkowski Engine
The Minkowski Engine is an auto-differentiation neural network library for high-dimensional sparse tensors. It supports all standard neural network layers such as convolution, pooling, unpooling, and broadcasting operations for sparse tensors.
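The representation such libraries build on can be sketched with a coordinate-format (COO-style) sparse tensor, storing only nonzero sites as coordinate-to-feature pairs (stdlib-only illustration, not the engine's actual data structure):

```python
# Only occupied coordinates of a 3-D grid are stored; everything else
# is implicitly zero, which is what makes high-dimensional data tractable.
sparse_tensor = {
    (0, 2, 1): [0.5],   # coordinate -> feature vector
    (4, 0, 3): [1.2],
}

def lookup(tensor, coord):
    # Absent coordinates are implicitly zero.
    return tensor.get(coord, [0.0])
```

Layers like sparse convolution then iterate over the stored coordinates rather than the full dense grid.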
ML From Scratch - Python 20.8k
Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.
Python implementations of some of the fundamental Machine Learning models and algorithms from scratch.
The purpose of this project is not to produce as optimized and computationally efficient algorithms as possible but rather to present the inner workings of them in a transparent and accessible way.
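In the spirit of the repo, ordinary least squares can be fit "from scratch" via the normal equations using nothing but NumPy (an illustrative sketch with synthetic data, not code from the project):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.01 * rng.normal(size=100)

# Normal equations: solve (X^T X) w = X^T y for the weight vector w.
w = np.linalg.solve(X.T @ X, X.T @ y)
```

Solving the linear system directly (rather than inverting X^T X) is the standard numerically stabler choice.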
AutoGBT - Python 96
AutoGBT stands for Automatically Optimized Gradient Boosting Trees, and is used for AutoML in a lifelong machine learning setting to classify large-volume, high-cardinality data streams under concept drift. AutoGBT was developed by a joint team ('autodidact.ai') from Flytxt, the Indian Institute of Technology Delhi, and CSIR-CEERI as part of the NIPS 2018 AutoML for Lifelong Machine Learning Challenge.
Cheatsheets ai - 14.2k
Essential Cheat Sheets for deep learning and machine learning researchers (https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5).
Cadence - Go 5.7k
Cadence is a distributed, scalable, durable, and highly available orchestration engine we developed at Uber Engineering to execute asynchronous long-running business logic in a scalable and resilient way.