I have a 16" M1 Pro machine, and use a commercial license of Conda. All of the packages that I particularly care about (numpy, scipy, astropy, skimage, …). You need the env var CONDA_SUBDIR=osx-arm64 set in your shell rc file or elsewhere to make sure conda only uses native code.

Numerics performance (numpy) is generally about on par with my previous 15" MBP with the last generation of intel i9s apple used. Some cases (mkl_fft, _rand, and so on) are slower on the M1 because, as best I can tell, no analog using apple accelerate or veclib exists. The actual linear algebra part of numpy — matmul and so on — is multi-core, just as it is with the mkl backend (or openblas). A = np.random.rand(4096, 4096) takes 598 ms on the M1 Pro to do the matmul, and all cores light up in htop (including the E-cores). The dual Xeon 6248R server I ssh into to actually run most of my code does the same in 82 ms (~7x faster, with 48 cores instead of 8 P-cores). So, MKL is better on intel chips than OpenBLAS is on ARM64.

Numpy shows that some vector extensions are used, but not all of them:

    Supported SIMD extensions in this NumPy install:
        baseline = NEON,NEON_FP16,NEON_VFPV4,ASIMD

I made basically no effort to install conda/numpy this way, beyond setting that environment variable. This is just conda install numpy from whichever channel it chooses to take it from. I have not found any fundamental compatibility problems.

On the other side of the fence, matlab's graphics performance is AWFUL on M1, and the whole matlab UI occasionally crashes. It runs under rosetta2, though, while all of the py/conda side of the fence is arm64 native.

I use miniforge3 (which is just miniconda3 with conda-forge as the default channel) a lot and come across issues quite often. Any library that comes with compiled binaries needs to be built and distributed for M1 (aka osx-arm64), which can take a while, and such builds are often just not available for older packages. Even when they exist, they're often buggy. For example, when installing some packages related to Jax, dm-tree was installed, and its binary was broken for osx-arm64, requiring it to be installed through pip as a workaround. Another example was when I needed an older version of TensorFlow (1.4) to run an old repository, for which there are no M1 packages available; I had to pull out my Linux machine to get that running at all.

So to summarize, the issues I've had have been with packages that both require compiled binaries and are older or not well maintained.
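The matmul timing described above can be reproduced in a few lines. The 4096x4096 size and the 598 ms / 82 ms figures are from the post; the time you measure will of course depend on your machine and on which BLAS backend (Accelerate, OpenBLAS, or MKL) your numpy was built against:

```python
import time
import numpy as np

# Square a 4096x4096 random matrix, as in the benchmark above.
# The matmul dispatches to numpy's multi-threaded BLAS backend,
# so all cores should light up in htop while it runs.
A = np.random.rand(4096, 4096)

t0 = time.perf_counter()
B = A @ A
elapsed_ms = (time.perf_counter() - t0) * 1e3

print(f"4096x4096 matmul: {elapsed_ms:.0f} ms")
```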
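The SIMD listing quoted above comes from numpy's own build report. On reasonably recent numpy versions (1.20+), `np.show_config()` prints it, including the baseline extensions the install was compiled with; the exact output format varies between numpy versions, so treat this as a quick check rather than a stable interface:

```python
import numpy as np

# Print numpy's build configuration. On recent versions this includes
# the "Supported SIMD extensions" section (baseline / found / not found)
# quoted above; on an M1 the baseline is NEON/ASIMD.
print(np.__version__)
np.show_config()
```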