Amazon dsstne - C++ Similar Projects List
https://github.com/amazon-archives/amazon-dsstne
4.4k
ITensors.jl
271
A Julia library for efficient tensor computations and tensor network calculations
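ITensors.jl is built around contracting tensors over shared indices. As a language-neutral illustration of what "tensor contraction" means (pure Python, not ITensors.jl's API), here is the core operation such a library optimizes:

```python
# Sketch of tensor contraction over one shared index:
# C[i, k] = sum_j A[i, j] * B[j, k]
# This is the basic step a tensor network library composes and optimizes.

def contract(A, B):
    """Contract the last index of A with the first index of B."""
    rows, shared, cols = len(A), len(A[0]), len(B[0])
    assert shared == len(B), "shared index dimensions must match"
    return [[sum(A[i][j] * B[j][k] for j in range(shared))
             for k in range(cols)] for i in range(rows)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(contract(A, B))  # [[19, 22], [43, 50]]
```

A real tensor network library generalizes this to tensors of arbitrary order and chooses the contraction order to minimize cost.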
FastDeploy - C++
149
⚡️ An easy-to-use and fast deep learning model deployment toolkit. FastDeploy is an easy-to-use inference and deployment toolbox. It covers mainstream, high-quality pre-trained models from across the industry and provides an out-of-the-box development experience for tasks including image classification, object detection, image segmentation, face detection, human keypoint detection, and text recognition, meeting developers' needs for rapid deployment across multiple scenarios, hardware types, and platforms.
SweepContractor.jl - Julia
20
Julia package for the contraction of tensor networks using the sweep-line-based contraction algorithm laid out in the paper "General tensor network decoding of 2D Pauli codes" (arXiv:2101.04125).
Sparsezoo
103
SparseZoo is a constantly growing repository of highly sparse and sparse-quantized models with matching sparsification recipes for neural networks. It simplifies and accelerates your time-to-value in building performant deep learning models with a collection of inference-optimized models and recipes to prototype from.
Sparse learning - Python
324
Sparse learning library and sparse momentum resources.
This repo contains a sparse learning library which allows you to wrap any PyTorch neural network with a sparse mask to emulate the training of sparse neural networks.
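The idea behind a sparse mask is to keep only a small fraction of the weights active. A minimal pure-Python sketch of magnitude-based masking (illustrative only; the library itself operates on PyTorch tensors):

```python
# Zero out all but the k largest-magnitude weights, as magnitude-based
# sparse training does. Pure Python for illustration.

def top_k_mask(weights, k):
    """Return a 0/1 mask keeping the k largest-magnitude weights."""
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]),
                    reverse=True)
    keep = set(ranked[:k])
    return [1 if i in keep else 0 for i in range(len(weights))]

w = [0.1, -0.9, 0.05, 0.7, -0.2]
mask = top_k_mask(w, 2)
masked = [wi * mi for wi, mi in zip(w, mask)]
print(mask)  # [0, 1, 0, 1, 0]
```

Sparse momentum additionally redistributes which weights are kept during training, rather than fixing the mask up front.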
Ray - Python
19.3k
A fast and simple framework for building and running distributed applications.
Ray is packaged with the following libraries for accelerating machine learning workloads:
Tune: Scalable Hyperparameter Tuning
RLlib: Scalable Reinforcement Learning
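Ray's core primitive is the remote task: functions execute asynchronously across a cluster and return futures. A single-machine analogy using only the standard library (this is the programming model, NOT Ray's API):

```python
# Submit work asynchronously and collect results via futures --
# the same pattern Ray scales out across a cluster.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(square, i) for i in range(8)]  # like f.remote(i)
    results = [f.result() for f in futures]               # like ray.get(futures)

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In Ray the same pattern is written with `@ray.remote`-decorated functions, and the scheduler places tasks on any machine in the cluster.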
FBTT Embedding
154
FBTT-Embedding library provides functionality to compress sparse embedding tables commonly used in machine learning models such as recommendation and natural language processing.
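The compression comes from tensor-train (TT) factorization: a large N x D table is replaced by a few small 3-D cores. A back-of-envelope parameter count using the standard TT core shapes (illustrative only; not FBTT-Embedding's API, and the factorization below is a hypothetical example):

```python
# Parameters in a TT factorization: each core k has shape
# (rank[k], n_k * d_k, rank[k+1]), where the n_k factor the row count
# and the d_k factor the embedding dimension.

def tt_params(row_factors, col_factors, ranks):
    """Total parameters across the TT cores."""
    total = 0
    for k, (n, d) in enumerate(zip(row_factors, col_factors)):
        total += ranks[k] * n * d * ranks[k + 1]
    return total

# A 1,000,000 x 64 table factored as (100*100*100) x (4*4*4), TT-rank 16.
dense = 1_000_000 * 64
tt = tt_params([100, 100, 100], [4, 4, 4], [1, 16, 16, 1])
print(dense, tt, round(dense / tt))  # 64000000 115200 556
```

The table shrinks by roughly 500x here; the trade-off is that every lookup must multiply core slices back together.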
Pytorch dnc - Python
289
Differentiable Neural Computers, Sparse Access Memory and Sparse Differentiable Neural Computers, for PyTorch.
Includes:
Differentiable Neural Computers (DNC)
Sparse Access Memory (SAM)
Sparse Differentiable Neural Computers (SDNC)
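All three architectures read from external memory by content-based addressing: a key vector is compared against every memory row with cosine similarity, and the sharpened scores are softmaxed into read weights. A minimal pure-Python sketch of that mechanism (not the pytorch-dnc API):

```python
# Content-based addressing: cosine similarity + softmax read weights.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)) + 1e-8)

def read_weights(memory, key, beta=10.0):
    """Softmax over beta-sharpened cosine similarities."""
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

memory = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
w = read_weights(memory, [1.0, 0.0])
print(max(range(3), key=lambda i: w[i]))  # row 0 matches the key best
```

The sparse variants (SAM, SDNC) replace the dense similarity over all rows with an approximate nearest-neighbor lookup so memory can grow large.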
MinkowskiEngine
1.5k
The Minkowski Engine is an auto-differentiation library for high-dimensional sparse tensors. It supports all standard neural network layers such as convolution, pooling, unpooling, and broadcasting operations for sparse tensors.
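A sparse tensor stores only the occupied coordinates and their features, which is what makes high-dimensional data such as 3D point clouds tractable. A minimal COO-style sketch of the representation (illustrative only; not the MinkowskiEngine API):

```python
# A sparse tensor as a coordinate -> feature map; everything else is
# an implicit zero. Real engines batch coordinates and vector features.

class SparseTensor:
    def __init__(self, coords, feats):
        # coords: integer coordinate tuples; feats: one value per coordinate
        self.data = dict(zip(map(tuple, coords), feats))

    def __getitem__(self, coord):
        return self.data.get(tuple(coord), 0.0)  # implicit zeros

    def nnz(self):
        return len(self.data)

# Three occupied voxels in a 3-D grid that could span millions of cells.
st = SparseTensor([(0, 0, 0), (5, 2, 9), (5, 3, 9)], [1.0, 0.5, 2.0])
print(st[(5, 2, 9)], st[(1, 1, 1)], st.nnz())  # 0.5 0.0 3
```

Sparse convolution then visits only the occupied coordinates and their neighbors instead of every cell of the grid.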
Deepsparse
379
CPU inference engine that delivers unprecedented performance for sparse models
ML From Scratch - Python
20.8k
Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.
Machine Learning From Scratch
About
Python implementations of some of the fundamental Machine Learning models and algorithms from scratch.
The purpose of this project is not to produce algorithms that are as optimized and computationally efficient as possible, but rather to present their inner workings in a transparent and accessible way.
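In that spirit, here is linear regression trained by plain gradient descent in bare Python, favoring readability over speed (a hypothetical example, not code taken from the repository):

```python
# Fit y = w*x + b by gradient descent on mean squared error.

def fit(xs, ys, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = mean((w*x + b - y)^2) w.r.t. w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1
w, b = fit(xs, ys)
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

Every model in the repo follows this pattern: explicit loss, explicit gradients, explicit update loop.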
Deep diamond - Clojure
323
A fast Clojure Tensor & Deep Learning library. Deep Diamond is a Clojure library for fast tensors and deep learning.
AutoGBT - Python
96
AutoGBT is used for AutoML in a lifelong machine learning setting to classify large-volume, high-cardinality data streams under concept drift.
AutoGBT
AutoGBT stands for Automatically Optimized Gradient Boosting Trees. It was developed by a joint team ('autodidact.ai') from Flytxt, the Indian Institute of Technology Delhi, and CSIR-CEERI as part of the NIPS 2018 AutoML for Lifelong Machine Learning Challenge.
Cheatsheets ai -
14.2k
Essential Cheat Sheets for deep learning and machine learning researchers: https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
AI Cheatsheets
Essential Cheat Sheets for deep learning and machine learning engineers
Website: https://aicheatsheets.com
Cadence - Go
5.7k
Cadence is a distributed, scalable, durable, and highly available orchestration engine to execute asynchronous long-running business logic in a scalable and resilient way.
Cadence
Cadence was developed at Uber Engineering to execute asynchronous long-running business logic in a scalable and resilient way.