What project should I build to gain knowledge of the industry and real-life projects?
I just recently completed my BTech and didn't get a job because of a lack of experience. So how can I get experience? Please suggest projects that would support my CV.
Thanks
/r/django
https://redd.it/1fmlytf
[D] Last Week in Medical AI: Top Research Papers/Models 🏅 (September 14 - September 21, 2024)
Medical AI Paper of the Week
How to Build the Virtual Cell with Artificial Intelligence: Priorities and Opportunities
This paper proposes a vision for "AI-powered Virtual Cells," aiming to create robust, data-driven representations of cells and cellular systems. It discusses the potential of AI to generate universal biological representations across scales and facilitate interpretable in-silico experiments using "Virtual Instruments."
Medical LLM & Other Models
GP-GPT: LLMs for Gene-Phenotype Mapping
This paper introduces GP-GPT, the first specialized large language model for genetic-phenotype knowledge representation and genomics relation analysis, trained on over 3 million terms from genomics, proteomics, and medical genetics datasets and publications.
HuatuoGPT-II, 1-stage Training for Medical LLMs
This paper introduces HuatuoGPT-II, a new large language model (LLM) for Traditional Chinese Medicine, trained using a unified input-output pair format to address data heterogeneity challenges in domain adaptation.
HuatuoGPT-Vision: Multimodal Medical LLMs
This paper introduces PubMedVision, a 1.3 million sample medical VQA dataset created by refining and denoising PubMed image-text pairs using MLLMs (GPT-4V).
Apollo: A Lightweight Multilingual Medical LLM
/r/MachineLearning
https://redd.it/1fmkhok
Sunday Daily Thread: What's everyone working on this week?
# Weekly Thread: What's Everyone Working On This Week? 🛠️
Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!
## How it Works:
1. Show & Tell: Share your current projects, completed works, or future ideas.
2. Discuss: Get feedback, find collaborators, or just chat about your project.
3. Inspire: Your project might inspire someone else, just as you might get inspired here.
## Guidelines:
Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.
## Example Shares:
1. Machine Learning Model: Working on a ML model to predict stock prices. Just cracked a 90% accuracy rate!
2. Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
3. Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!
Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟
/r/Python
https://redd.it/1fmgft6
How do I get the role's permission instead of the role_id?
So I have role-based access permissions, but when I try to get the user's role, it shows me the permission instead.
decorators.py
from functools import wraps
from flask import abort
from flask_login import current_user  # current_user is assumed to come from Flask-Login

def permission_required(permission):
    """Restrict a view to users with the given permission."""
    def decorator(f):
        @wraps(f)
        def decorated_function(*args, **kwargs):
            if not current_user.is_authenticated:
                abort(403)
            print(current_user)
            # This query is the part that returns a permission value rather than the role:
            permissions = db.session.query(Role.permission).join(Permission.permission).filter_by(permission=permission).first()
            print(permission)
            print(permissions)
            if permission != permissions:
                abort(403)
            return f(*args, **kwargs)
        return decorated_function
    return decorator
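For comparison, one way to check the required permission against the user's role is to join from the role to its permissions and collect the permission names. The sketch below is a hypothetical illustration that builds on the same module as the decorator above: it assumes a Permission model with name and role_id columns and a role_id attribute on the user, which may not match the actual schema.

# Hypothetical schema: Permission.name holds the permission string and
# Permission.role_id points at Role. Adjust to the real models.
rows = (
    db.session.query(Permission.name)
    .join(Role, Permission.role_id == Role.id)
    .filter(Role.id == current_user.role_id)
    .all()
)
permission_names = {name for (name,) in rows}
if permission not in permission_names:
    abort(403)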
/r/flask
https://redd.it/1fmala1
Deploying a Flask app for free
Hey,
I want to know a way to deploy a Flask app for free.
My use case: I want to allow a few of my friends to change the variables for my ML model so that I can find the best parameters.
So the hosting service must be able to handle a few requests and train my ML model with about 50k text inputs at most.
/r/flask
https://redd.it/1fm7tbf
[D] How do researchers in hot topics keep up?
Last night I was reading "Training Language Models to Self-Correct via Reinforcement Learning" (https://arxiv.org/abs/2409.12917) from the DeepMind folks, which was released 2 days ago. The paper is about using RL to train LLMs to self-correct, but that is somewhat irrelevant to my question.
The paper is interesting, but while I was reading I wondered: how do they have time to do all that is mentioned there? With this I mean:
- Based on the pretrained models that are used, most likely they only started working on it like 2-3 months ago
- Most references and citations are from the second half of 2024 (from May-June onwards), so less than 3 months old as well
So, during those few months, they had to: read and thoroughly study all the cited papers (around 45 in this case, and again, most of them extremely recent), come up with the new idea, develop it, run the experiments (and nowadays SFT is not a matter of 15 minutes either), compile the results, and write the actual paper. And this assumes they were not concurrently working on other papers/endeavors…
As a solo researcher, I cannot even imagine doing something similar in that period of time, but even with a small
/r/MachineLearning
https://redd.it/1flz1vo
Can’t post my flask website online
Hi, I'm a somewhat experienced coder and I made a website that needed to be in Flask to use a Python library for scraping and to output data on the site. I work often with Python but not much with websites, so more issues are occurring than I expected.
I've easily been able to test and run the website in PyCharm on localhost, but I'm struggling to get it onto a website online so other people can look at it without having to download Python and all that.
I've tried PythonAnywhere, but the free version gave me a lot of issues and doesn't offer enough storage for my site to stay free. Are there any other free alternatives for hosting that aren't too complicated?
One more note: I'm also struggling to upload my PyCharm project to GitHub, as "access to this site has been restricted". Any help or info there would be appreciated!
/r/flask
https://redd.it/1fkc4j2
Store last_seen time before the user is timed out in Flask
Hello guys, I need a bit of help from the community.
I have implemented a Flask app where I keep track of my users' logins and logouts.
Every time a user clicks login or logout, I record the time at which they did.
I also set my session to last 30 seconds for testing:
permanent_session_lifetime = timedelta(seconds=30)
I want to store the time right before my user is logged out of the session or before the timeout is reached. Can you help me, please? Some people forget to log out.
This is what I came up with, but it only works when I keep refreshing the page.
@app.route("/user", methods=["POST","GET"])
def user():
if "user" in session:
user = session["user"]
users = User.query.filter_by(id=User_log.user_id).distinct().count()
Number_of_connection = User_log.query.distinct().count()
user_online = User_log.query.filter_by(status="on").
group_by(User_log.user_id).distinct().count()
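A common workaround, sketched below as a minimal, hypothetical example rather than a drop-in fix, is to stamp a last_seen column on every request with a before_request hook, so that whenever the session later expires, the most recent timestamp is already stored. The User.last_seen column and the lookup via session["user"] are assumptions about the schema.

from datetime import datetime, timezone

@app.before_request
def update_last_seen():
    # Assumes User has a last_seen column and session["user"] holds the username.
    if "user" in session:
        u = User.query.filter_by(name=session["user"]).first()
        if u is not None:
            u.last_seen = datetime.now(timezone.utc)
            db.session.commit()

When the 30-second session lifetime elapses, last_seen then reflects the last request the user actually made, which is as close to the "logged out by timeout" moment as the server can observe without a background job.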
/r/flask
https://redd.it/1fkj5su
Hardware requirements for a Flask web app
Hi all,
I am trying to determine the hardware specs that I am gonna need for my Flask app. The basics:
- Flask backend, the frontend is just html templates with bootstrap and tailwind, no framework is used. Some of the pages are loading simple JS scripts, nothing too fancy.
- The application consists of different functionalities, the main one being that users can solve coding exercises in Java, C#, Python or JS. The user writes the code in the application and then submits the solution; a POST request containing the inputs and the expected outputs (taken from the database) plus the actual code is sent to another service, the Piston API (Dockerized), which executes the code with the respective compiler/interpreter and returns stdout/stderr/runtime, etc. The Piston API runs separately from the actual app. In other words, the application is something similar to LeetCode, but a very simplified version.
- The rest of the functionalities consist mostly of loading dynamic pages that read the user's data from a Postgres database. Some of these pages have POST-method forms, e.g. the user can update their own profile, changing their email address, bio, etc., which executes write operations against the Postgres DB, but most of the transactions
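For context, the submission flow described above boils down to one HTTP call per attempt. Below is a rough sketch of what that request might look like against a Piston-style /api/v2/execute endpoint; the URL, field names, and response shape are assumptions here and should be checked against the Piston documentation.

import requests

PISTON_URL = "http://piston:2000/api/v2/execute"  # hypothetical address of the Dockerized Piston service

def run_submission(language, version, source_code, stdin=""):
    payload = {
        "language": language,            # e.g. "python"
        "version": version,              # e.g. "3.10.0"
        "files": [{"content": source_code}],
        "stdin": stdin,                  # exercise input taken from the database
    }
    resp = requests.post(PISTON_URL, json=payload, timeout=30)
    resp.raise_for_status()
    # Piston-style responses typically contain a "run" section with stdout/stderr/exit code.
    return resp.json().get("run", {})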
/r/flask
https://redd.it/1fkkhhj
Read .DBF tables and bring the data to the screen
Hello community, I'm new here and I need your help. I'm making an application that reads a specific table from a database and displays the data on the screen. It finds and recognizes the table, but it doesn't bring the data to the screen. Could you give me any suggestions or help me with this?
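Without seeing the code it is hard to say where the data gets lost, but a minimal working pattern with Flask and the dbfread package looks roughly like the sketch below. The file name clients.dbf and the template table.html are placeholders, and dbfread is only one of several libraries that can read .DBF tables.

from dbfread import DBF
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/clients")
def clients():
    # dbfread yields each record as an ordered dict of column name -> value.
    table = DBF("clients.dbf", encoding="latin-1")
    rows = [dict(record) for record in table]
    columns = table.field_names
    # table.html would loop over columns and rows to render an HTML table.
    return render_template("table.html", columns=columns, rows=rows)

If the route runs but the page stays empty, the usual suspects are the template not iterating over the passed variables or the encoding of the .DBF file.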
/r/flask
https://redd.it/1fkrw7l
In need of a kind soul
Hello everyone. I am a complete beginner at backend development, trying to teach myself by making fun projects as a hobby. But I've not been able to deploy any project or even test it on localhost, because no matter what I do, Django won't render the templates, or at least that's what I think the problem is, since the page I get is the rocket saying Django was successfully installed. Supposedly you get that page when something is wrong or when DEBUG = True, which it is not; I've changed it a billion times. I've tried fixing my views.py, urls.py, and settings.py a thousand times and nothing works. I'm sure it's something anyone with basic Django knowledge could fix in a heartbeat, but I can't seem to figure it out. Thank you to anyone who takes the time out of their day to help me. The project directory is on GitHub: https://github.com/KlingstromNicho/TryingGPTIntegration/tree/main/sales_analysis
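For reference, the rocket page is Django's built-in welcome view, served while DEBUG is on and the project's root URLconf has no patterns of its own to match. A minimal setup that replaces it looks roughly like the sketch below; the app name sales_analysis comes from the linked repo, but the view, URL, and template names here are placeholders rather than the repo's actual code.

# sales_analysis/views.py
from django.shortcuts import render

def index(request):
    # Renders sales_analysis/templates/sales_analysis/index.html (placeholder path).
    return render(request, "sales_analysis/index.html")

# project-level urls.py
from django.contrib import admin
from django.urls import path
from sales_analysis import views

urlpatterns = [
    path("admin/", admin.site.urls),
    path("", views.index, name="index"),  # the root URL now maps to your own view
]

The app also has to be listed in INSTALLED_APPS, and the template has to live somewhere Django looks: an app-level templates/ folder or a directory added to the DIRS entry of the TEMPLATES setting.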
/r/djangolearning
https://redd.it/1flj5jg
Get lists from a form?
I want to have the user select colors from a form, so that when I do `request.form['colors']` I'd get a list of the checked/selected options.
Ex:
print(request.form['colors'])
Would either be a string like
"Red, Blue, Green"
Or
"Red","Blue","Green"
Does this come from the form itself? Or am I supposed to generate a list from a multitude of different fields that get combined in flask?
Is there a best practice for this sort of task?
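It comes from the form itself, in the sense that every checkbox shares the same name attribute; Flask then exposes all submitted values through request.form.getlist(), whereas plain indexing only returns the first value. A minimal sketch with placeholder field names:

# Template side: several checkboxes sharing the name "colors"
# <input type="checkbox" name="colors" value="Red">
# <input type="checkbox" name="colors" value="Blue">
# <input type="checkbox" name="colors" value="Green">

from flask import Flask, request

app = Flask(__name__)

@app.route("/pick", methods=["POST"])
def pick():
    colors = request.form.getlist("colors")  # e.g. ["Red", "Green"]
    return ", ".join(colors)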
/r/flask
https://redd.it/1fksxnq
I am finding the official Django REST Framework docs overwhelming
I am constantly trying to grasp the ideas behind DRF from the docs, but I am intimidated and overwhelmed by the topics and the language used there. Most of the time when I sit down to read a certain topic, another topic or feature comes up that is new to me, I click into that link, and the cycle repeats until I find myself lost.
If you work in the field of DRF, please tell me how you gained confidence in your early days and what strategies you used to build a good understanding of this framework.
Your suggestions would mean a lot.
Thank you.
/r/django
https://redd.it/1fl7m3z
Should I go for advanced concepts if I am going to work in AI in the future?
Okay, so I was learning machine learning and decided to stop and focus on Django.
I have built projects and have a good understanding of Django at this point.
I want to ask whether I should go for more advanced concepts if I am going to work in machine learning in the future.
I think I won't be working in machine learning for at least a year (I will keep learning during this year).
So should I learn the advanced concepts? Will they help me even if I start working as an ML engineer?
It would be good if someone who worked in Django and is now working in ML could guide me.
Thanks in advance
/r/django
https://redd.it/1fmpc3d
[D] 4x 4090 vs H100 in JAX
Does anyone have experience with multi-GPU JAX? I know there is this guide which discusses data parallelism, but what if at some point I want to finetune a large model that cannot fit in 24GB (e.g. an LLM or large vision model)? Can anyone elaborate on the real-world performance hit for data-sharding and model-sharding in JAX? Would a 4x 4090/5090 setup be significantly slower than an H100 in these cases? Is there a large development-time overhead for model/data sharding in JAX? There also seem to be multiple avenues to achieve parallelism. In practice, do most people tend to use `pmap`, sharding, or some other approach?
My research tends to focus on RL, so I am not sure whether the H100's HBM is as big a factor as it is with transformers/LLMs.
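As a rough illustration of the sharding route (as opposed to `pmap`), the sketch below shards the batch dimension of an array across whatever local GPUs are visible and lets `jit` run the computation shard by shard. It is a minimal data-parallel example under those assumptions, not a recipe for model-sharding a large model, and the mesh axis name is arbitrary.

import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1D device mesh over all local devices (e.g. 4x 4090).
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))
batch_sharding = NamedSharding(mesh, P("data"))  # shard axis 0 across devices

# Place a batch so each GPU holds one slice along axis 0.
x = jax.device_put(jnp.ones((32, 1024)), batch_sharding)

@jax.jit
def forward(x):
    return jnp.tanh(x) @ jnp.ones((1024, 256))

y = forward(x)  # jit keeps the computation sharded across the mesh
print(y.sharding)

Model sharding uses the same Mesh/PartitionSpec machinery, just with parameter axes named in the spec instead of (or in addition to) the batch axis.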
/r/MachineLearning
https://redd.it/1fmiunf
firebase firestore populator
Alright, so I had this issue: when I wanted to use algorithms in a Python backend script that would later need Firestore, I didn't know what to do. I would always use the same script that would automatically generate filler data for me in the database. Then, I realized that I could create a package that does this.
So, I created a package that fixes that issue. It has saved me a lot of time recently, and you can do so much more with it. Here's the link: https://pypi.org/project/firebase-populator/
/r/flask
https://redd.it/1fm8fe1
[P] Latent Diffusion in pure-torch (no huggingface dependencies)
Been fiddling with diffusion for the last year and I decided to release a package with my implementation from scratch of DDPM latent diffusion models. It includes implementations for both the denoising UNet and the VAE+GAN used to embed the image.
It's pure torch: I find Hugging Face's diffusers good for simple tasks, but if you want to learn how the internals work or hack the model a bit, it falls short, as the codebase is humongous and not geared towards reusability of components (though I insist it is a good library for its purposes). To install it, simply run `pip install tiny-diff`.
I aimed to create a reusable implementation, without any ifs in the forward methods (squeezing polymorphism as much as I could so the forward is as clear as possible) and with modular components (so if you don't want to use the whole model but only parts of it, you can grab what you want).
Repo Link: https://github.com/AlejandroBaron/tiny-diff
/r/MachineLearning
https://redd.it/1flxs7d
How can I get better at asking questions so I can build better software with fewer meetings?
Hey guys, I'm a junior developer currently working in banking, and I want to know how I can learn to ask better questions so I can take business requirements and build the required software with as few meetings as possible.
I believe I'm a decent developer, but I struggle to ask the right questions, so a lot of changes happen throughout development. I've been fortunate enough to be allowed to build whole features alone, but I feel my poor communication will hinder my growth. Any tips?
/r/django
https://redd.it/1fllt76
Google Chrome crashing when using DRF and React
Has anyone else faced this issue when working on a DRF and React project in VS Code?
How to resolve this?
/r/django
https://redd.it/1fm37js
Django vs Laravel
What is something you would only develop with Django and not Laravel, and vice versa?
Edit: Been working with Django for several years but never Laravel so I'm trying to differentiate between the two by example. Thanks
/r/django
https://redd.it/1flx0xd
ParLlama v0.3.8 released. Now supports Ollama, OpenAI, GoogleAI, Anthropic, Groq
# What My Project Does:
PAR LLAMA is a powerful TUI (Text User Interface) written in Python and designed for easy management and use of Ollama-based Large Language Models, as well as interfacing with online providers such as OpenAI, GoogleAI, Anthropic, and Groq.
# Key Features:
Easy-to-use interface for interacting with Ollama and cloud hosted LLMs
Dark and Light mode support, plus custom themes
Flexible installation options (uv, pipx, pip or dev mode)
Chat session management
Custom prompt library support
# GitHub and PyPI
PAR LLAMA is under active development and getting new features all the time.
Check out the project on GitHub or for full documentation, installation instructions, and to contribute: [https://github.com/paulrobello/parllama](https://github.com/paulrobello/parllama)
PyPI https://pypi.org/project/parllama/
# Comparison:
I have seen many command-line and web applications for interacting with LLMs, but have not found any TUI-related applications.
# Target Audience:
Anybody who loves, or wants to love, terminal interactions and LLMs.
/r/Python
https://redd.it/1fltdi8
Ereddicator v3.1: A Python-based Reddit Content Removal Tool
What My Project Does:
Ereddicator is a Python script that allows you to selectively delete and/or edit your Reddit content: https://github.com/Jelly-Pudding/ereddicator/
Key features include:
Simple GUI
Selective Content Removal: Choose which types of content to delete, including:
Comments
Posts
Saved items
Upvoted content
Downvoted content
Hidden posts
Edit-Only Mode: For comments and posts, you can choose to only edit the content without deleting it. This may be desirable as Reddit is capable of restoring deleted comments.
Karma Threshold: You can set karma thresholds for comments and posts. Content with karma above or equal to the threshold will be preserved.
Subreddit Filtering:
Whitelist: Specify subreddits to exclude from processing.
Blacklist: Specify subreddits to exclusively process, ignoring all others.
Date Range Filtering: Set a specific date range to process content from, allowing you to target content from a particular time period.
Dry Run Mode: Simulate the removal process without actually making any changes. In this mode, Ereddicator will print out what actions would be taken (e.g. what comments and posts will be deleted) without modifying any of your Reddit content.
/r/Python
https://redd.it/1flrphi
Updates to `django-nice` v0.5.0 🙂
Updates to [version 0.5.0](https://github.com/rexsum420/django-nice)
Description of Updates and Extensions Made to the `django_nice` Library
The recent updates and extensions to the `django_nice` library have significantly increased its flexibility, allowing for more dynamic data binding and supporting complex use cases such as binding multiple fields to a single UI element, handling user-specific data dynamically, and enabling real-time updates with Server-Sent Events (SSE). Below is a detailed breakdown of the changes:
# 1. **Dynamic Binding with dynamic_query**
**Previous Version:**
* The library required a static `object_id` to bind a UI element to a single field of a specific model instance.
**Update:**
* The `bind_element_to_model` function now supports **dynamic queries** (`dynamic_query` parameter), which allows model instances to be retrieved dynamically based on any criteria (e.g., a logged-in user's ID, the current high score, etc.).
* **New Features:**
* You can now bind UI elements to objects retrieved dynamically, without needing to know the `object_id` in advance.
* This is useful for scenarios like binding a UI element to the current user's data or retrieving a model instance based on specific business logic (e.g., highest score).
**Example:**
bind_element_to_model(
    element,
    app_label='people',
    model_name='Person',
/r/django
https://redd.it/1flv1to
Get clean markdown from any data source using vision-language models in Python
I have found that quality data preprocessing for LLMs from raw data sources can be an incredibly difficult task, so I'm sharing a new project I began working on this summer to solve this problem.
What My Project Does:
The package in question is an open-source project designed to simplify the process of scraping clean data from various sources (PDFs, URLs, Docs, Images, etc). Whether you're working with PDFs, web pages, or images, it can handle the extraction into a clean markdown format. Unlike traditional scraping tools, it is able to understand the context and layout of documents, thanks to vision-language models. It even handles complex tables and figures.
The beauty of The Pipe is that it's not just a black box. It's open-source so you can peek under the hood, understand how it works, customize it to fit your specific needs, etc. The Python library is quite thoroughly documented for this kind of stuff.
Comparison:
Look at existing Python packages for document scraping such as PyPDF2, Unstructured, PyMuPDF (fitz), PDFMiner, Tabula-py, Camelot, pdfplumber, and marker. While these tools are great at basic text extraction, they often struggle with more complex tasks like handling scanned PDFs, irregular data tables, tables that span multiple pages, and documents
/r/Python
https://redd.it/1fllewz
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
# Weekly Thread: Resource Request and Sharing 📚
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
## How it Works:
1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.
## Guidelines:
Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.
## Example Shares:
1. Book: "Fluent Python" \- Great for understanding Pythonic idioms.
2. Video: Python Data Structures \- Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators \- A deep dive into decorators.
## Example Requests:
1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.
Share the knowledge, enrich the community. Happy learning! 🌟
/r/Python
https://redd.it/1flq8cn
How to plan a new feature?
I am a full-stack developer, and every time I want to plan a new feature it feels very overwhelming and hard. Although my skills are well beyond the required feature, I always struggle. I read that I need to break the problem down, but I don't know how to start thinking about breaking it down.
Can you guys please tell me, if you have experience, how you plan such a feature? Are there tools that help? Also, should I write pseudocode, or is that not always a good idea?
Thanks in advance.
/r/django
https://redd.it/1flagah
Simple Automation Script For Extracting Zip Files
**AutoExtract** is a Python-based tool that monitors a specified folder for ZIP files and automatically extracts them to a designated directory. It keeps track of processed files to avoid duplicate extractions and runs continuously, checking for new ZIP files at regular intervals.
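The general pattern behind a tool like this is small enough to sketch in a few lines. The snippet below is a minimal illustration of the approach (poll a folder, extract any new archives, remember what has already been processed); the paths and interval are placeholders, and it is not the project's actual code.

import time
import zipfile
from pathlib import Path

WATCH_DIR = Path("incoming")     # hypothetical folder to monitor
EXTRACT_DIR = Path("extracted")  # hypothetical destination
CHECK_INTERVAL = 30              # seconds between scans

def watch_and_extract():
    processed = set()  # remember archives that have already been handled
    while True:
        for archive in WATCH_DIR.glob("*.zip"):
            if archive.name in processed:
                continue
            with zipfile.ZipFile(archive) as zf:
                zf.extractall(EXTRACT_DIR / archive.stem)
            processed.add(archive.name)
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    watch_and_extract()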
**✅What My Project Does:**
* Monitors a folder for new ZIP files
* Automatically extracts ZIP contents to a specified location
* Keeps track of processed files to prevent redundant extractions
* Customizable folder paths and checking intervals
**✅Target Audience:**
This project is primarily intended for
* **Personal use**: Automate repetitive tasks such as extracting ZIP files from a specified directory.
**✅Comparison**
Compared to existing alternatives like desktop file managers with built-in extraction tools:
* **Simplicity**: Unlike GUI tools, this Python-based approach allows automation without manual intervention.
* **Customization**: Users can modify the folder paths, extraction logic, or check intervals, making it more adaptable than off-the-shelf solutions.
GitHub Link - [https://github.com/pratham2402/AutoExtract](https://github.com/pratham2402/AutoExtract)
/r/Python
https://redd.it/1fl6n3u
I am overwhelmed by the Django docs.
I am constantly trying to grasp the ideas behind DRF from the docs, but I am intimidated and overwhelmed by the topics and the language used there. Most of the time when I sit down to read a certain topic, another topic or feature comes up that is new to me, I click into that link, and the cycle repeats until I find myself lost.
If you work in the field of DRF, please tell me how you gained confidence in your early days and what strategies you used to build a good understanding of this framework.
Your suggestions would mean a lot.
Thank you.
/r/djangolearning
https://redd.it/1fl7mm6