Dongbang Books : English Book Online Store


36% off
※ Due to the nature of imported books, we may contact you about an out-of-stock or out-of-print title even after your order is complete.
We do our best to keep inventory accurate, so we ask for your understanding :)

※ For bulk orders of 10 or more copies, we recommend checking with customer service before ordering.
Reach us by 1:1 inquiry, KakaoTalk, or phone and we will assist you promptly :)

1:1 Inquiry   > KakaoTalk Inquiry   > ☎ 02-3445-1703

The Alignment Problem : Machine Learning and Human Values (Paperback, US Edition)

A New York Times recommended book

  • List price
    33,000 KRW
  • Sale price
    21,000 KRW
  • Purchase benefits

    Reward points: +420 KRW

  • Shipping fee
    2,500 KRW (conditional)
    Shipping fee by order amount:
    0 KRW to under 30,000 KRW: 2,500 KRW
    30,000 KRW and above: free

    Shipping fee calculation basis: sale price + option price + additional item price + text option price - product discount - product coupon discount (see the sketch after the product details list below)

    Regional surcharges:
    Incheon Jung-gu/Ganghwa/Ongjin island areas: 4,500-6,000 KRW
    Chungnam Dangjin/Seosan island areas: 4,000-7,000 KRW
    Chungnam Boryeong/Taean island areas: 5,000 KRW
    Gyeongbuk Ulleung-gun (all areas): 5,000 KRW
    Busan Gangseo-gu island areas: 4,000 KRW
    Gyeongnam Sacheon/Tongyeong/Geoje island areas: 3,000-4,000 KRW
    Jeonbuk Gunsan/Buan island areas: 5,000 KRW
    Jeonnam Yeosu/Jindo/Sinan island areas: 7,000-8,000 KRW
    Jeonnam Wando/Goheung/Mokpo island areas: 5,000-7,000 KRW
    Jeonnam Yeonggwang/Boseong island areas: 4,000 KRW
    Jeju (all areas): 3,000 KRW
    Jeju Chuja-myeon: 7,000 KRW
    Jeju Udo: 6,000 KRW
    Paid at time of order (prepayment)
  • Product number
    2697641
  • ISBN / Code
    9780393868333
  • Author
    Brian Christian
  • Publisher
    W. W. Norton & Company
  • Publication date
    2021-10-26
  • Format
    Paperback | 496 pages
  • Dimensions / Weight
    139 x 210 x 35mm | 396g
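
For illustration, here is a minimal sketch of how the shipping-fee basis formula and the amount tiers above combine. The function names are hypothetical, not the store's actual code, and it assumes (our assumption, not stated above) that regional surcharges apply even when the base fee is waived:

    # Minimal sketch of the shipping-fee schedule above (illustrative only;
    # function names are hypothetical, amounts are from this page).

    def shipping_fee_basis(sale_price, option_price=0, extra_item_price=0,
                           text_option_price=0, product_discount=0,
                           coupon_discount=0):
        """Amount the fee tiers apply to, per the formula above."""
        return (sale_price + option_price + extra_item_price
                + text_option_price - product_discount - coupon_discount)

    def shipping_fee(basis, regional_surcharge=0):
        """2,500 KRW under 30,000 KRW; free from 30,000 KRW.
        Assumes the regional surcharge is added in either case."""
        base = 0 if basis >= 30_000 else 2_500
        return base + regional_surcharge

    # Example: this book at its 21,000 KRW sale price, shipped to Jeju
    print(shipping_fee(shipping_fee_basis(21_000), regional_surcharge=3_000))  # 5500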

      ■ Damaged copies [B-grade books] cannot be purchased with Naver Pay

      ■ With Naver Pay purchases, Dongbang Books coupons/reward points cannot be used or earned
      ■ With Naver Pay purchases, remote/island-area surcharges are billed separately (we will contact you)

      "If you¡¯re going to read one book on artificial intelligence, this is the one." ¡ªStephen Marche, New York Times

      A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them.

      Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us, and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.

      Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole, and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands.

      The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called "artificial intelligence." They are steadily replacing both human judgment and explicitly programmed software.

      In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they, and we, succeed or fail in solving the alignment problem will be a defining human story.

      The Alignment Problem offers an unflinching reckoning with humanity's biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture, and finds a story by turns harrowing and hopeful.

      Product reviews: none registered yet.
      Product inquiries: none registered yet.

      Shipping Information

      ※ Delivery time: 2-4 days from the payment date (excluding public holidays)

        - Domestically sourced stock items, DVDs, and remote/island areas: 5-7 days

      ※ Shipping fee: 2,500 KRW (free when the actual payment amount is 30,000 KRW or more)

        - Jeju and other island areas: ferry surcharge (base shipping fee + 3,000-8,000 KRW)

        - Shipping to military units and overseas addresses is not supported.

      ※ Partner courier: CJ Logistics, http://www.doortodoor.co.kr/

      ☎ 1588-1255 (Mon-Fri 08:00-18:00 / Sat 09:00-13:00)

      Exchange / Return Information

      ※ You can request an exchange or return through customer service within 7 days of receiving the product.

          (Phone / KakaoTalk / 1:1 inquiry)

        - For event items, any free gifts must be returned as well for a refund to be issued.

        - For defective items, the full amount including shipping is refunded.

      ※ Returning an entire order: 5,000 KRW (paid by the customer)

        - Partial return: 2,500 KRW if the order total is 30,000 KRW or more (paid by the customer)

          5,000 KRW if the order total is under 30,000 KRW (paid by the customer)
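
      As a minimal sketch, the return-fee rules above reduce to the following (the function name is hypothetical, not the store's actual logic):

    # Minimal sketch of the return-fee schedule above (illustrative only;
    # the function name is hypothetical, amounts are from this page).

    def return_fee(order_total, full_return):
        """Return-shipping fee charged to the customer, in KRW."""
        if full_return:
            return 5_000      # entire order returned
        if order_total >= 30_000:
            return 2_500      # partial return, order total >= 30,000 KRW
        return 5_000          # partial return, order total < 30,000 KRW

    print(return_fee(21_000, full_return=False))  # 5000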

       

      ※ Cases where return/exchange is not possible

        - When the packaging of sealed books, CDs, etc. has been opened or damaged

        - Books that can be read in full within a short period

          e.g. CDs, travel guides, comics, cookbooks, maps, photo books, workbooks, etc.

       

      ※ Return address

       - B1 Units 101-102, 55 Seongsuil-ro (SK Techno Building), Seongdong-gu, Seoul

       


      The Alignment Problem : Machine Learning and Human Values


