How to automatically save data from url to database

Problem Description

I'm trying to build a news feed. I have an Ember frontend built and a Rails API backend built and they are talking to each other. I have the feedsController, feeds model, and migration built in the backend. I have data stored in my database and displaying correctly on my Ember frontend using faker. Easy enough. What I'm trying to do is load data (title, image, author, etc) from NewsApi - by keyword - into my rails app, save it to my database, and then display it on my frontend. How do I load and save this data automatically in my rails app? I am looking for an automated solution - 1: load news stories by keyword, 2: save to database, 3: display. I have limited experience with backend problems, but can work my way through. I do not see any tutorials with this specific problem though. If you'd like to see my code, I can paste it in, but the files are essentially boilerplate with your basic index, create, destroy, etc. I understand the basics of Rails, I just can't seem to figure this out. Thanks in advance!

Edit - I've made some progress over the last day or so. I now have data coming from newsapi and can display it in localhost:3000/feeds. My model:

require 'rest_client'

class Feed < ActiveRecord::Base
  # GET the NewsAPI endpoint and return the raw JSON body.
  def self.get_data(url)
    RestClient.get(url, { content_type: :json })
  end

  # Build the "everything" query for the keyword and parse the response into a Hash.
  def self.retrieve_results
    url = "https://newsapi.org/v2/everything?q=marijuana&apiKey=#{ENV['NEWS_API_KEY']}"
    JSON.parse(get_data(url))
  end
end
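
As a sanity check (not in my actual files, just how I poked at it in a Rails console), the parsed hash exposes the same keys as the sample response shown further down:

results = Feed.retrieve_results
results["totalResults"]             # => 6988 for the sample payload below
results["articles"].first["title"]  # => "Cannabis may alter the genetic makeup of sperm"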

And my controller:

class FeedsController < ApplicationController
  before_action :set_feed, only: [:show, :create]

  def index
    @feeds = Feed.retrieve_results()
    render json: @feeds
    # redirect_to :action => 'create'  ??  Maybe?
  end

  def create

    # I believe I need code here

  end

  private
    def set_feed
      @feed = Feed.find(params[:id])
    end

    def feed_params
      params.require(:feed).permit(:name, :summary, :url, :published_at, :guid)
    end
end  
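
What I imagine the wiring might look like (just a guess at this point — refresh_from_api! is a hypothetical method I still need to write; I sketch it after the sample data below):

class FeedsController < ApplicationController
  def index
    # Pull fresh stories from NewsAPI, upsert them, then serve from my own table.
    Feed.refresh_from_api!
    render json: Feed.order(published_at: :desc)
  end
end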

The data:

{
  "status": "ok",
  "totalResults": 6988,
  "articles": [
    {
      "source": {
        "id": "mashable",
        "name": "Mashable"
      },
      "author": "Lacey Smith",
      "title": "Cannabis may alter the genetic makeup of sperm",
      "description": "A new study suggests that marijuana use can not only lower sperm count, but also affect sperm DNA. Read more... More about Marijuana, Cannabis, Sperm, Science, and Health",
      "url": "https://mashable.com/video/cannabis-sperm-dna/",
      "urlToImage": "https://i.amz.mshcdn.com/cdBWehMuVAb4DgU9flYo0lQTyT8=/1200x630/2019%2F01%2F16%2F18%2F1421dae3db754d0a8c4276e524b47f7f.23835.jpg",
      "publishedAt": "2019-01-16T20:13:12Z",
      "content": null
    },
    {
      "source": {
        "id": "the-new-york-times",
        "name": "The New York Times"
      },
      "author": "ALEX BERENSON",
      "title": "What Advocates of Legalizing Pot Don’t Want You to Know",
      "description": "The wave toward legalization ignores the serious health risks of marijuana.",
      "url": "https://www.nytimes.com/2019/01/04/opinion/marijuana-pot-health-risks-legalization.html",
      "urlToImage": "https://static01.nyt.com/images/2019/01/04/opinion/04berenson/04berenson-facebookJumbo.jpg",
      "publishedAt": "2019-01-05T02:14:00Z",
      "content": "Meanwhile, legalization advocates have squelched discussion of the serious mental health risks of marijuana and THC, the chemical responsible for the drugs psychoactive effects. As I have seen firsthand in writing a book about cannabis, anyone who raises thos… [+1428 chars]"
    },

Now I need to iterate through each feed, pull the correct info out (source, author, title, description, etc - basically all of it) and save it to my own database columns. How do I save this data automatically when I get a response from the url? I also need a way to make certain that I am only saving the story once, and then the GET & CREATE will simply update the database as new stories come in. Again, thanks for any help!
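
My rough idea of the save step (an untested sketch — the column names are taken from my feed_params above, and I key on the article url so re-running it never creates duplicate stories):

class Feed < ActiveRecord::Base
  # Fetch the keyword search from NewsAPI and upsert each article into the feeds table.
  def self.refresh_from_api!
    retrieve_results["articles"].each do |article|
      feed = find_or_initialize_by(url: article["url"])   # one row per story url
      feed.update(
        name:         article["title"],
        summary:      article["description"],
        published_at: article["publishedAt"],
        guid:         article["url"]
      )
    end
  end
end

To make this run on a schedule instead of on every page load, I assume a rake task fired by cron (or something like the whenever gem) could simply call Feed.refresh_from_api!.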

Recommended Answer

I am not sure I understand the question. You need to use something like fetch. If you are using fetch it should push the newly created data to the rails backend if you set it up correctly to do so. Using POST is your autosave. You mentioned trying to load data, but why is GET not working? GET should load the data and POST should post and save the data. I come from React and Redux, but you should be able to use something like axios to make setting it up easier. Let me know if I did not answer your question.
