get-available-resources
This skill should be used at the start of any computationally intensive scientific task to detect and report available system resources (CPU cores, GPUs, memory, disk space). It creates a JSON file with resource information and strategic recommendations that inform computational approach decisions such as whether to use parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), or memory-efficient strategies. Use this skill before running analyses, training models, processing large datasets, or any task where resource constraints matter.
Category: Development Tools
Get Available Resources - System Resource Detection and Scientific Computing Optimization Recommendations
Overview
Automatically detect the computer's CPU, GPU, memory, and disk space, generate a resource report, and provide scientific computing strategy recommendations to help you choose appropriate parallel processing schemes, GPU acceleration libraries, or memory optimization strategies.
Applicable Scenarios
1. Before big data analysis
Before processing GB-scale datasets, first check available memory and disk space to determine whether the data can be loaded into memory or whether an out-of-core solution such as Dask or Zarr is needed. For example, when analyzing 50 GB of genomic data, the skill will tell you whether to use Dask chunking or whether pandas alone will do.
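The decision above can be sketched as a small helper that compares the dataset size against the report's available memory. The report dict here is illustrative (in practice it would be loaded from the generated JSON file); the field name memory.available_gb follows the report schema described in this document.

```python
# Illustrative report contents; in practice this would come from
# json.load(open(".claude_resources.json")).
report = {"memory": {"available_gb": 8.5}}

def choose_loader(dataset_gb, report):
    """Pick an in-memory or out-of-core strategy for a dataset."""
    available = report["memory"]["available_gb"]
    # Rule of thumb: stay under ~50% of available memory for pandas.
    if dataset_gb <= 0.5 * available:
        return "pandas"   # load directly into memory
    return "dask"         # chunked, out-of-core processing

print(choose_loader(2, report))   # -> pandas
print(choose_loader(50, report))  # -> dask
```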
2. Before model training
Before training a neural network, detect whether a GPU is available (NVIDIA CUDA, AMD ROCm, or Apple Silicon Metal) and which backend library (PyTorch, TensorFlow, JAX) and corresponding accelerated build should be used. The skill recommends the most appropriate GPU acceleration option for the detected hardware.
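A minimal sketch of this check for the PyTorch case, assuming the standard torch device-availability APIs; it degrades gracefully when PyTorch is not installed:

```python
def detect_torch_backend():
    """Return the best available PyTorch device string, or None when
    PyTorch is not installed (a sketch, not the skill's own script)."""
    try:
        import torch
    except ImportError:
        return None
    if torch.cuda.is_available():      # NVIDIA CUDA (also ROCm builds)
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"                   # Apple Silicon Metal
    return "cpu"

backend = detect_torch_backend()
```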
3. Before parallel processing
Before using joblib, multiprocessing, or Dask for parallel computation, detect the number of CPU cores to determine the optimal number of workers and avoid the performance degradation caused by over-parallelization. The skill gives concrete parallelism suggestions based on your CPU core count.
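The worker-count heuristic described in the FAQ can be written directly; the rule for machines with fewer than 4 cores is an assumption (the document only specifies the 8+ and 4–7 core cases). A thread pool keeps the sketch dependency-free; the same value would be passed as joblib's n_jobs or a process pool's max_workers.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def suggested_workers(cores):
    """Worker-count heuristic matching the skill's recommendations."""
    if cores >= 8:
        return cores - 2   # high parallelism: core count minus 2
    if cores >= 4:
        return cores - 1   # medium parallelism: core count minus 1
    return max(1, cores)   # low parallelism (assumed: use all cores)

n_workers = suggested_workers(os.cpu_count() or 1)

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    squares = list(pool.map(lambda x: x * x, range(8)))
```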
Core Features
1. Automatic hardware detection
One-click detection of complete system hardware configuration: CPU core count (physical/logical), GPU type (NVIDIA/AMD/Apple Silicon), total and available memory, and remaining disk space. Supports GPU detection via nvidia-smi, rocm-smi, and Apple Metal, and automatically recognizes acceleration backends such as PyTorch MPS, TensorFlow Metal, and JAX Metal.
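A stdlib-only sketch of the detection step (the real script additionally queries memory and the ROCm/Metal backends; the dict layout here is illustrative):

```python
import os
import platform
import shutil
import subprocess

def detect_resources():
    """Detect CPU, platform, disk, and NVIDIA GPU presence."""
    info = {
        "cpu": {"logical_cores": os.cpu_count() or 1},
        "platform": {"system": platform.system(),
                     "machine": platform.machine()},
        "disk": {"free_gb": round(shutil.disk_usage(".").free / 1e9, 1)},
    }
    # nvidia-smi is the standard probe for NVIDIA GPUs; if the binary
    # is missing or fails, assume no usable NVIDIA driver.
    try:
        subprocess.run(["nvidia-smi", "-L"],
                       capture_output=True, check=True)
        info["gpu"] = {"nvidia": True}
    except (FileNotFoundError, subprocess.CalledProcessError):
        info["gpu"] = {"nvidia": False}
    return info

resources = detect_resources()
```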
2. Intelligent resource recommendations
Generate customized computation strategy recommendations from the detection results: a parallel processing scheme (high/medium/low parallelism), a memory strategy (memory-constrained/moderate/ample), a GPU acceleration plan (which library to use), and large-file handling approaches (streaming/compression/Zarr). All suggestions are based on the actually detected resources, avoiding performance problems caused by guesswork.
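The memory-strategy classification, for instance, reduces to a simple threshold mapping; the cut-offs below follow the categories this document describes (<4 GB, 4–16 GB, >16 GB):

```python
def memory_strategy(available_gb):
    """Map available memory to the strategy categories the report uses."""
    if available_gb < 4:
        return "memory-constrained"   # stream/chunk everything
    if available_gb <= 16:
        return "moderate"             # mix in-memory and chunked work
    return "ample"                    # in-memory processing is fine
```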
3. JSON resource report
Generate a .claude_resources.json file containing full system information and recommendations, which can be read directly by code. The report includes specific values (e.g., a recommendation to use 6 workers, or 8.5 GB of available memory), making it easy to adjust computation strategies dynamically in Python code based on resource conditions.
FAQ
How do I detect whether my computer has a GPU?
Run the detection script and check the gpu section in the generated JSON file. An NVIDIA GPU will appear in the nvidia_gpus array; Apple Silicon (M1/M2/M3/M4) will appear in the apple_silicon object. The skill also lists available acceleration backends, such as CUDA, ROCm, or Metal, in available_backends.
How do I know whether a dataset can be loaded into memory?
Check the available_gb field under memory in the JSON report; this is the currently available memory. If the dataset size exceeds 50% of available memory, use Dask or Zarr for chunked processing. The memory_strategy recommendation tells you whether the current state is "memory-constrained" (<4 GB), "moderate" (4–16 GB), or "ample" (>16 GB) and provides a matching handling strategy.
How do I determine how many workers to use for parallel processing?
Check the recommendations.parallel_processing -> suggested_workers field in the JSON report. The skill computes a recommended worker count from the CPU core count: with high parallelism (8+ cores) it suggests the core count minus 2; with medium parallelism (4–7 cores), the core count minus 1, avoiding resource contention from over-parallelization. It also recommends whether to use joblib, multiprocessing, or Dask.